| forum_id (string, 9-20 chars) | forum_title (string, 3-179 chars) | forum_authors (sequence, 0-82 items) | forum_abstract (string, 1-3.52k chars) | forum_keywords (sequence, 1-29 items) | forum_decision (string, 22 classes) | forum_pdf_url (string, 39-50 chars) | forum_url (string, 41-52 chars) | venue (string, 46 classes) | year (date, 2013-01-01 to 2025-01-01) | reviews (sequence) |
|---|---|---|---|---|---|---|---|---|---|---|
H92-E4kFwbR | Composite Adversarial Training for Multiple Adversarial Perturbations and Beyond | [
"Xinyang Zhang",
"Zheng Zhang",
"Ting Wang"
] | One intriguing property of deep neural networks (DNNs) is their vulnerability to adversarial perturbations. Despite the plethora of work on defending against individual perturbation models, improving DNN robustness against the combinations of multiple perturbations is still fairly under-studied. In this paper, we propose \underline{c}omposite \underline{a}dversarial \underline{t}raining (CAT), a novel training method that flexibly integrates and optimizes multiple adversarial losses, leading to significant robustness improvement with respect to individual perturbations as well as their ``compositions''. Through empirical evaluation on benchmark datasets and models, we show that CAT outperforms existing adversarial training methods by large margins in defending against the compositions of pixel perturbations and spatial transformations, two major classes of adversarial perturbation models, while incurring limited impact on clean inputs. | [
"adversarial examples",
"deep learning",
"robustness"
] | Reject | https://openreview.net/pdf?id=H92-E4kFwbR | https://openreview.net/forum?id=H92-E4kFwbR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"djnJkyyO020",
"-29JJRL0g_",
"QLYokYQG5xQ",
"YJq7zb_hMJS",
"5UU6PthcRn",
"wA3w_pMvSTs",
"xWSQ9IJy5M1",
"AVCMFSvCEIr",
"eR_h1NmfbV8"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040418788,
1606265663837,
1606250774064,
1606191638658,
1606094812021,
1604083967070,
1603893335149,
1603886873780,
1603866923201
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3766/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3766/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3766/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3766/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3766/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3766/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3766/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3766/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"I thank authors and reviewers for discussions. Reviewers found the paper (specially the CAT-r method proposed in the rebuttal period) interesting but there are some remaining concerns about the significance of the results and experiments. Given all, I think the paper still needs a bit of more work before being accepted. I encourage authors to address comments raised by the reviewers to improve their paper.\\n\\n- AC\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We appreciate comments and concerns from the reviewer.\\n\\nWe've made a few modifications to the draft, with additional results and formatting fix-ups. Below we respond to the concerns and comments from the reviewer. \\n\\nC1 - performance & clean accuracy: First, we acknowledge that the CAT does not always perform better than PGD, AVG, MAX, and MSD, especially for individual perturbation types. We will fix our tone about individual perturbations systematic in the next iteration or camera-ready version. Second, as for clean accuracy, we proposed a lightweight trick called CAT-r (r for replacement) to make the final model a better trade-off between clean accuracy and robust accuracy. It improves results on three of all four settings in the paper.\", \"c2___fairness_of_the_comparison\": \"This is a good point from the reviewer. We argue that CAT works differently from AVG, MAX, and MSD by using the composite adversarial attack. They might have different trade-offs in terms of clean accuracy and the \\\"union\\\" and the \\\"composite\\\" accuracy. Plus, the additional results in C1 indicate that CAT-r can achieve better clean accuracy and better robust accuracy than naive CAT.\\n\\nC3 - $\\\\alpha$ for baselines: No. $\\\\alpha$ is not used in training baseline methods. The point here is to measure each model's robust accuracy to varying $\\\\alpha$. We will make further edits to make the results more accessible.\", \"oc3\": \"As suggested by the reviewer, the results are bold now.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for the encouraging feedback.\\n\\nWe've made modifications and additional experiments to improve our draft. Here we provide our responses to each concern and suggestion.\", \"p1___threat_model\": \"We add a short theoretic analysis in Appendix A.\", \"p2___larger_datasets\": \"We've added preliminary results for the CIFAR-100 dataset in Appendix D.3. The results show a similar trend as in the CIFAR-10 case. In the original version, we only performed experiments on MNIST and CIFAR10 since the baselines (MAV, AVG, and MSD) just consider these two datasets.\\n\\nP3 - Implementation of $\\\\ell_p$ attacks: We follow the default hyperparameters used in the Foolbox for these $\\\\ell_p$ attacks. We've added a description in the \\\"result\\\" part of Section 4.1 to tell readers how results of individual attacks are aggregated into each row in Table 1, 2.\", \"p4___equation_8\": \"We rework it as Equation 7 in the revised version.\", \"p5___presentation_of_results\": \"Following the reviewer's suggestion, we replenish the detail in the caption of Table 1, 2, 3, 4.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We first thank you for the detailed comments from the reviewer.\\n\\nWe've made modifications and additional experiments to improve our draft. Here we provide responses that try to answer and address each comment.\", \"w1\": \"Regarding drops in clean accuracy, we proposed a lightweight trick called CAT-r (r for replacement) to make the final model a better trade-off between clean accuracy and robust accuracy. It improves results on three of all four settings in the paper. Besides, in Appendix D.3, we supply partial results for the CIFAR-100 dataset. Table 11 and Table 12 show that CAT, MAX, AVG has a similar impact on the clean accuracy even on a complicated dataset. The worse clean accuracy of CAT, MAX, AVG compared to $\\\\ell_p$ PGD models may cause by the inherent challenges in achieving robustness for diverse adversaries.\", \"w2\": \"CAT indeed fails to outperform MSD in some cases. However, we argue that the point of CAT is to achieve robustness in both the previously defined union threat model as well as the new composite threat model. The missing of MSD in Section 4.2 is that MSD requires the perturbations of all threats to be expressed in the same space. While this is straightforward in $\\\\ell_p$ cases, it is hard to unify spatial and pixel perturbation together in the same space.\", \"w3\": \"We add Table 12 in Appendix D.4 to show CAT's computational cost and baseline methods. The comparison is made under the same batch size and the same total number of epochs. Therefore, the result indicates that the CAT is fast.\", \"minor1\": \"we introduce how rows and columns in Table 1, 2, 3, 4 work in the caption of each Table.\\n\\nThough it is a bit late, we're looking forward to hearing more comments and suggestions from the reviewer.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"We are grateful for the encouraging and insightful comments from reviewers. We've made a few modifications to the draft. Here we summarize the changes we made:\\n\\n1) Efforts on improving clean accuracy\\n In Section 3.2. We add CAT-r, a lightweight trick to fulfill a better trade-off between clean accuracy and robust accuracy against the union and composite adversary. Its results are appended to the last column of Table 1, 2, 3, 4. Now the clean accuracies are closer to MAX/AVG and MSD. Furthermore, the performances are still better MAX/MSD/AVG for Table 2, 3, 4. It is quite close to the MSD in Table 1. \\n\\n2) A new CIFAR-100 experiments\\n In Appendix D.3, we provide additional results on the CIFAR-100 dataset. The results show a similar trend as Table 1, 2, 3, 4. There is a large accuracy gap between MSD/AVG/CAT to clean model. The gap between CAT to MAX/AVG is less than 10%, as in other tables, which indicates that our CAT does not scale worse concerning dataset size. Therefore, we believe there might be an inherent trade-off between adversarial robustness for multiple perturbation and clean accuracy. \\n\\n3) Computation efficiency comparison\\n In Appendix D.4, we compare the computational efficiency of CAT to MAX, AVG, and MAX. The results show CAT is much faster and is able to work with a larger dataset. \\n\\n4) Justifications for composite attack\\n In Appendix A, we analyze the strength of union and composite attacks under a simple setting.\\n\\n5) Formatting & Descriptions\\n - Marks attack and robust models in the caption of Table 1, 2, 3, 4.\\n - In Section 4.1, we add how each p norm's attack results are aggregated and what individual perturbations contribute to the result.\\n - We bold the best results in Table 1, 2, 3, 4. \\n - Equation 8 in original version is reworked as Equation 7 in the revised version. \\n\\nWe'll post our responses to each reviewer right away.\"}",
"{\"title\": \"[Official Review]\", \"review\": \"#### Summary ####\\nThis paper tackles the problem of adversarial training for the image classification task. It proposed a novel adversarial training method called composite adversarial training (CAT) against combined attacks constructed by multiple perturbations. First, CAT is based on the composite adversarial attacks, in which the attackers explore different sources of perturbations. Second, CAT leverages the composite adversarial attacks as the inner loop for optimization during the training. The experimental evaluations have been focused on comparing the proposed CAT with existing robust training methods including adversarial training with PGD attacks, AVG, MAX (Tramer and Boneh, 2019), and MSD (Maini et al. 2020) on MNIST and CIFAR-10 classification benchmarks.\\n\\n#### Comments ####\\nThis paper studies an important problem in adversarial machine learning. The paper is well-motivated with novel technical contributions (Section 3.1) supported by reasonably designed experiments. However, reviewer feels the submission in the current form is a borderline case mainly due to mixed or inconclusive experimental results.\", \"w1\": \"The clean accuracy of CAT (Table 1 - 4, first row, last column) is significantly worse than methods such as AVG & MAX and MSD, especially on CIFAR-10 where the accuracy drops 20+% (I assume the state-of-the-art model has 90+% accuracy for the 10-way classification on CIFAR-10). This seems to be a major weakness of the proposed method. Reviewer understands the tradeoff between clean accuracy and accuracy under attack, but not sure how much value it is given the proposed defense method sacrifices too much on the clean accuracy. What makes it worse, this is just the performance drop of 10-way classification on CIFAR dataset. Reviewer is worried if this gap is even more significant on CIFAR-100 or ImageNet (w/ 1000 classes). It would be good to have some ablation studies.\", \"w2\": \"Besides the drop on clean accuracy, reviewer fails to see a clear winner between MSD and CAT (see the last two columns in Table 1 and Table 2). CAT seems to be more robust to composite attacks but not as robust as MSD on other attacks. Such comparisons are missing in Section 4.2 (pixel perturbation and spatial transformations). It would be good to comment on this.\", \"w3\": \"It would be good to report the computational cost (e.g., number of iterations in optimization, training time) of the proposed composite training method and explain how it is compared to the existing methods.\\n\\nMinor1 (applied for all the tables): it would be good to mention each row is a different attack method and each column is a different defense (robust training) method. It is not crystal clear at the first glance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #4\", \"review\": [\"Summary\", \"This paper proposes adversarial training with a novel threat model. Specifically, the authors propose to compose multiple adversaries, such as the ones based on l_p norm and spatial transform, in a predefined order to create a strong adversary in the adversarial training. The paper empirically demonstrated that the composite adversary is effective against previous adversarial defense mechanisms. It also demonstrated that the proposed adversarial training can lead to the classifier robust against the composite attack as well as the individual or union of multiple adversaries.\", \"Pros\", \"The composite adversary seems to be novel and effective in terms of both adversarial attack and adversarial training.\", \"The paper is generally well-written and easy to read.\", \"The experiment results convey comprehensive evaluation and analysis. I especially enjoyed that it covers various attack scenarios, such as the ones with unseen attacks and composite attack with a random order, etc.\", \"Concerns & Suggestions\", \"It is not clear why the composited thread model can be stronger than individual or union attacks as claimed by the authors. If there are some theoretical justifications/proofs, it would be interesting to see such discussions (e.g., the composited attack consistently leads to higher classification loss (inner maximization of adversarial training objective)).\", \"Although I appreciate authors for their comprehensive experiments, the current results are based on fairly small and easy datasets and it would be still interesting to see the results on more complex datasets such as Cifar-100 or mini-ImageNet.\", \"It is unclear how exactly the l_p attacks are implemented. In Section 4.1, the authors mentioned various methods for l_p attacks, such as PGD, FGSN, C&W, DeepFool, Salt&Pepper, etc., but it is unclear how they are actually used in the experiments, for instance in Table 1 and 2.\", \"It would be clear if authors add constraints on total attack budget on Eq.(8)\", \"Table 5 & 6: Please clarify that the rows are the adversarially-trained models and columns are threats. It is confusing since rows and columns are different from the previous tables.\", \"--- post rebuttal update ----\", \"The authors successfully addressed my initial concerns regarding more analysis and experiments on a larger dataset. Therefore, I keep my rating weak accept.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Nice contribution to adversarial training literature but with mixed results\", \"review\": \"The authors propose a method for dealing with *composite* adversarial attacks, which are defined as a sequence of perturbation operators each applying some constrained perturbation to the output of the previous operator. Their method models the composed adversarial examples $x^*$ as the sum of the unperturbed example with a series of perturbations $\\\\delta_i$ which maximize the estimator's loss. They compare their results to other existing adversarial training methods against multiple types of adversarial attacks.\", \"pros\": [\"Interesting idea, seems like a very natural continuation of existing work\", \"Good experimental design, results are reasonably thorough\", \"Some results are encouraging\"], \"cons\": [\"Explanation of method (CAT) is somewhat lacking. It's not clear to me exactly what their method does differently than the baselines explained in the background.\", \"Results are mixed with discussion focusing almost entirely on the positive parts. For example, CAT consistently performs significantly worse than baselines on \\\"clean accuracy\\\" and worse than one or more baselines on other singular attacks (see Tables 1,2,3,4).\", \"Results in section 5.2 lack explanation (i.e. what do the table columns/rows actually mean)\", \"Minor formatting issues\", \"Overall, I think the central problem that the authors are trying to solve is important and their work makes a reasonable contribution towards the solution. Despite the apparent mixed results, this paper should be a candidate for acceptance.\"], \"additional_comments_for_the_authors\": [\"It would be helpful to provide references for the definitions of \\\"robust accuracy\\\" and \\\"clean accuracy\\\"; I'm sure these are metrics that have been defined and used in prior work but this can sometimes make it difficult for outside readers to find where they are rigorously defined.\", \"As mentioned in the Cons, you should make it more clear what the reader should be looking for in the tables. Reading just by the accuracy scores, it seems like CAT often performs worse or about the same as baselines in multiple experiments.\", \"Table captions should be above, not below, the table. This particularly problematic with Table 4/Figure 4 where the Table caption looks like the title of Figure 4.\", \"As mentioned before, equation 8 does not (for me) satisfactorily explain what CAT actually does.\", \"In equation 8, $\\\\delta_i$ appears in the constraint but not in the expression; perhaps you meant to write:\", \"$$\", \"x' = \\\\underset{x^{(m)}}{\\\\arg\\\\max} \\\\ell (f_{\\\\theta}(x^{(m)} + \\\\delta_i,y)\", \"$$\", \"The distinction between the different indexing notations $x_i$ and $x^{(i)}$ is not always clear\", \"It's not clear what the notation means in Tables 5, 6, and 7 and how it relates to \\\"ordering\\\" of perturbations.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThis paper proposed an interesting new form of adverserial attack (composite adversarial attack) as well as an algorithm to defend this form of attack (CAT). The new form of attack is constructed as a composition of different individual perturbation models including pixel perturbation and spatial transformations. The CAT is proposed to defend both individual attack and composite attack by penalising the maximum accuracy loss during the sequential generation process of a composite attack perturbation. Empirical experiments comparing the proposed algorithm under both individual and composite attacks are conducted on benchmark datasets against baseline methods. The proposed CAT outperformed the baselines under composite attacks. Further analysis and discussion on different variations of composite attack as well as CAT are also presented with possible future exploration directions.\", \"pros\": [\"The paper is well structured in general and easy to understand.\", \"The idea of composite attack is interesting and meaningful to the neural network adversarial attacking area.\", \"The proposed method improves the network robustness under composite attack.\", \"The detailed analysis on composite attacks is valuable.\"], \"cons\": [\"My main concern with the paper is the general performance of the proposed algorithm and the fairness of the comparison.\", \"While the paper claims outstanding performance on individual perturbation model attacks, it is not always true across the two dataset. And the proposed algorithm always presents a lower clean accuracy in most of the experiment settings by a relatively large margin. There seems to be a clear tradeoff between the clean accuracy and the robustness towards a more aggressive attack (composite attack). The result limits the strength of the algorithm.\", \"I am concerned about the fairness of the comparison against baseline methods like MAX/ AVG. Since the paper used pretrained models from previous work, MAX/AVG baseline models are trained based on Eq(2)(3) and evaluated under the composite attack. In this case, the underlying perturbation space considered in Eq(2)(3) is different from (smaller than) the one in composite attack. (E.g. true maximum perturbation will not never be considered when training these models)\", \"Another question is: what does alpha mean for baseline methods during training? Is alpha used to rescale the perturbation during baseline training or not? If not, then Figure 4 presents very limited information since alpha is an unfair information available to the proposed model. If yes, then isn\\u2019t the whole experiment a scaling version of the main results?\"], \"other_comments\": \"- I would move the introduction of spatial transformation perturbation to section 2 as it is part of the fundamentals. \\n- Some details of baselines in Appendix A should be moved to the main text to provide a more self-contained experiment section. E.g. how the baseline models are trained.\\n- It would be nice to bold the best performance number in the tables. \\n\\n\\n---------------------------------------\\npost-rebuttal\\n\\nI would like to thank the authors for their efforts to improve the methods and the draft. Part of my concerns was resolved. \\nFor clean accuracy, CAT-r did provide a better trade-off. However, it is improved after the submission deadline, it can't be counted into the original contribution in theory. 
\\nFor the concern that the comparison to the baseline presents unfairness as the proposed method was designed for the composite attack with a larger perturbation space, I think the author agrees with my point to some extend. \\nI decided to keep my original score deal to the remaining weakness in the paper.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
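Editor's note: the record above repeatedly discusses composing perturbation operators in a fixed order (the composite threat model of the paper's Eq. (7)/(8)). The sketch below is a minimal, illustrative rendering of that idea, not the authors' CAT code: it chains an L-inf PGD pixel perturbation with a brute-force search over small integer translations as a crude stand-in for the spatial-transformation threat. The function name and all hyperparameter values are the editor's placeholders.

```python
import torch
import torch.nn.functional as F

def composite_attack(model, x, y, eps=8/255, step=2/255, n_steps=10, max_shift=2):
    """Illustrative composite attack: an L-inf pixel perturbation (stage 1)
    applied first, then a spatial perturbation (stage 2) applied to the
    output of stage 1, mirroring the sequential composition discussed above."""
    # Stage 1: L-inf-bounded pixel perturbation via PGD.
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(n_steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        delta = (delta + step * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    x_adv = (x + delta).clamp(0.0, 1.0).detach()

    # Stage 2: spatial transformation, here a brute-force search over
    # small integer translations of the stage-1 output.
    best, best_loss = x_adv, -float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            cand = torch.roll(x_adv, shifts=(dy, dx), dims=(-2, -1))
            with torch.no_grad():
                cand_loss = F.cross_entropy(model(cand), y).item()
            if cand_loss > best_loss:
                best, best_loss = cand, cand_loss
    return best
```

Adversarial training against such a composite adversary would then minimise the classification loss on `composite_attack(model, x, y)` rather than on clean inputs.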
aYJr_Rt30p | Learning Representation in Colour Conversion | [
"Arash Akbarinia",
"Raquel Gil-Rodriguez",
"Alban Flachot",
"Matteo Toscani"
] | Colours can be represented in an infinite set of spaces highlighting distinct features. Here, we investigated the impact of colour spaces on the encoding capacity of a visual system that is subject to information compression, specifically variational autoencoders (VAEs) where bottlenecks are imposed. To this end, we propose a novel unsupervised task: colour space conversion (ColourConvNets). We trained several instances of VAEs whose input and output are in different colour spaces, e.g. from RGB to CIE L*a*b* (in total five colour spaces were examined). This allowed us to systematically study the influence of input-output colour spaces on the encoding efficiency and learnt representation. Our evaluations demonstrate that ColourConvNets with decorrelated output colour spaces produce higher quality images, also evident in pixel-wise low-level metrics such as colour difference ($\Delta E$), peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). We also assessed the ColourConvNets' capacity to reconstruct the global content in two downstream tasks: image classification (ImageNet) and scene segmentation (COCO). Our results show a 5-10% performance boost for decorrelating ColourConvNets with respect to the baseline network (whose input and output are RGB). Furthermore, we thoroughly analysed the finite embedding space of Vector Quantised VAEs with three different methods (single feature, hue shift and linear transformation). The interpretations reached with these techniques are in agreement suggesting that (i) luminance and chromatic information are encoded in separate embedding vectors, and (ii) the structure of the network's embedding space is determined by the output colour space. | [
"Color representation",
"VAE",
"Color space",
"Unsupervised learning"
] | Reject | https://openreview.net/pdf?id=aYJr_Rt30p | https://openreview.net/forum?id=aYJr_Rt30p | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"szh8G9B4EIW",
"-10C-5EmMNi",
"A2Hdd-G6mE5",
"GO7zKdic82x",
"7oTh550JlUS",
"1FdVdkkNqs",
"uQtXmjYkaIp"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040501837,
1605536477194,
1605535840353,
1605534956977,
1604270109253,
1603888793930,
1603869559811
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3765/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3765/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3765/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3765/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3765/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3765/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a novel unsupervised task of colour conversion. In this respect, the task\\nbecomes more like a regression problem -- rather than autoencoding the decoder needs to reconstruct the pixels in a different color system. \\nWhile the idea is potentially interesting, there are fundamental problems with the paper:\\n\\n* The motivations of the paper are obscure, (understanding colour representation in complex visual systems? Learning better representations? Disentangling color related information from the rest)\\n* No analysis is provided to highlight what the novel objective is achieving\\n\\nThe answers of the authors to AnonReviewer1 are not very convincing. As AnonReviewer1 has pointed out, the mapping between the color spaces is typically a simple invertible map so any conclusion that the authors arrive about \\u2018substantial impact\\u2019 could be simply the artefact of the particular architecture choice. The other claim, that \\u2018the proposed framework is able to encompass additional constraints relevant in understanding why the considered representations could have emerged in the brain\\u2019 quite far fetched and speculative at best.\\n\\nThe authors have a point in their reply ii) to AnonReviewer1 but if the claim is about the particular color coding schemata, it would be natural to include simple experiments where some arbitrary 3x3 invertible mapping (e.g. rgb in spherical or cylindrical coordinates) next to other color schemata to make a stronger point.\\n\\nIn point iii) the authors refer to tasks without being very explicit about what the tasks are. Colorization is a known proxy pretext task for learning representations when the downstream classification task is not known a-priori. The paper would have been much more easy to motivate if the authors could demonstrate the merit of the proposed objective using a more extensive and careful representation learning evaluation methodology.\\n\\nIn light of the above points, I feel that the paper needs further iterations to be presented at ICLR.\"}",
"{\"title\": \"Improving the motivation and highlighting take home message\", \"comment\": \"Thank you for reviewing our article. We appreciate your constructive comments.\\n\\n### Motivation\\nThis was also pointed out to us by the first reviewer.\\nThus, we have revised the manuscript to make our motivation clear. A framework for a fair comparison of different colour spaces within deep autoencoders that face a bottleneck to transmit information.\\n\\n> The motivation of finding a better embedding space of colour is admirable, unfortunately, the analysis and methodology does not support the motivation.\\n\\nIn order to answer this research question, we aimed to design experiments that make the comparison across input-output colour spaces as fair as possible. Essentially, all factors are identical in the presented networks except the input-output colour spaces. We would highly appreciate it if you could please expand more on why you think the methodology and analysis fall short to study the stated motivation.\\n\\n### What does the article teach us?\\nWe believe the findings of this article show (i) the choice of colour spaces can make a substantial impact on the encoding capacity of a VAE (i.e. networks with a bottleneck), thus from a practical perspective, applications that use the status-quo RGB perhaps could benefit from using opponent colour spaces. Furthermore as pointed out by the first reviewer the proposed framework is able to encompass additional constraints relevant in understanding why the considered representations could have emerged in the brain. (ii) The analysis of embedding space (with three different techniques) reach the same conclusion that each embedding vector encodes a certain type of information (a specific colour or luminance). We find this very exciting that within a complex deep network human-interpretable features emerge. (iii) A VAE consists of three stages (encoder, embedding space and decoder). To the best of our knowledge, the contribution of each component to the network's task remains as an open question. Our results provide some evidence in the context of colour space conversion, the transformation occurs within the encoding stage. This perhaps triggers some ideas for other researchers conducted on VAEs with a different set of tasks.\\n\\n> The analysis of this paper does not teach us any additional knowledge.\\n\\nWe appreciate it if you would please let us know whether you find our findings inadequate or you believe they are not grounded.\\n\\nPlease see the revised manuscript where we have implemented these points (the largest changes are highlighted with blue colour). We would be grateful to receive more comments from you. Thanks a lot.\"}",
"{\"title\": \"Applied ColourConvnets\", \"comment\": \"Thank you for reviewing our article. We appreciate your constructive comments.\\n\\n> To address this, it may show some applications based on the proposed task/conclusions.\\n\\nThe potential applications for our framework could be any of those with standard VAEs. Essentially, by employing the proposed framework we showed that decorrelated colour spaces improve the encoding capacity of standard VAEs. The presented evaluation on high-level visual tasks could be considered as one example of such applications (i.e. an image-classification/scene-recognition embedding system with limited computational resources, thus requiring to perform under compressed images).\\n\\nA standard application for VAEs is image deblurring (e.g. this situation can arise with camera in motion). We simulated such a scenario by evaluating the ResNet50 accuracy with input images that are blurred with a Gaussian function. Similarly, we first passed the blurred images to our autoencoder (without any fine-tuning) and gave the output to ResNet 50. Here are the corresponding results:\\n\\n| \\u00a0| $\\\\sigma = 1.5$ | $\\\\sigma = 2.5$ | $\\\\sigma = 3.5$ |\\n|: ---- | :---: | :---: | :---: |\\n| Original images |0.57 | 0.37 | 0.22 |\\n| rgb2rgb (128x128) |0.64 | 0.53 | 0.40 |\\n| rgb2lab (128x128) |0.64 | 0.53 | 0.40 |\\n| rgb2rgb (8x128) |0.44 | 0.25 | 0.11 |\\n| rgb2lab (8x128) |0.54 | 0.40 | 0.24 |\\n\\nWe can observe that when images are blurred, an object classification network benefit from ColourConvnets. Although for embedding space of 128x128 rgb2rgb is as good as the rgb2lab, for a smaller embedding space (8x128) with a greater rate of compression, only the rgb2lab can outperform the original images.\", \"the_reduction_of_information_for_the_embedding_space_8x128_is_384_in_bits\": \"$\\\\frac{224 \\\\times 224 \\\\times 3 \\\\times 8} {56 \\\\times 56 \\\\times 1} = 384 $, where (the number 224 corresponds to the input image size, the number 3 denotes the number of colour channels, the number 8 corresponds to uint8 input images; the number 56 is the embedding space spatial size, the number 1 refers to the number of bits for vectors of embedding space with K=8).\\nGiven our manuscript including the supplementary materials is 21 pages, we have not included the Gaussian blur results in the revised version. However, if you think this would be interesting to many readers, we can include a corresponding figure.\\n\\nLast but not least, we believe our framework could potentially lead to applications of image compression applications. There have been some works (e.g. \\\"Lossy Image Compression with Compressive Autoencoders\\\" ICLR 2017) showing deep autoencoders might perform better than standard JPEG compression. However, incorporating our findings with such frameworks is outside of the scope of this article and we believe it can be addressed in future research.\\n\\n\\n### Minor issues:\\n1. We used state-of-the-art image classification (ResNet50) and scene segmentation (FPN) for high-level evaluation. Essentially, when these networks are input with the output of rgb2rgb ColourConvNet, they correspond to the state-of-the-art results. Thus, our baseline is always rgb2rgb of the same architecture. Nevertheless, ColourConvNets with embedding space of K=128 and D=128 obtain 70% top-1 accuracy on ImageNet and 60% IoU on COCO, on a par with the state-of-the-art on original images (it is worth considering that the reconstructed images are substantially compressed). 
\\nTo be precise, the pretrained ResNet50 we used obtains 73% accuracy on original images of ImageNet. The same network obtains 71% accuracy with the rgb2lab compressed images. The performance drop is only 2% for a reduction of about 55 in bits $\\\\frac{224 \\\\times 224 \\\\times 3 \\\\times 8} {56 \\\\times 56 \\\\times 7} \\\\approx 54.85 $.\\n2. Thank you. We have corrected the typo.\\n3. Thank you for pointing this out. In the original submission, we only referred to the appendix figure. This has been corrected. We also expanded the corresponding caption with more details.\\n\\nPlease see the revised manuscript where we have implemented these points (the largest changes are highlighted with blue colour). We would be grateful to receive more comments from you. Thanks a lot.\"}",
"{\"title\": \"Constructive Feedback in Details\", \"comment\": \"Thank you very much for reviewing our article. We highly appreciate your constructive and detailed comments. They were truly helpful in improving the quality of our manuscript.\\n\\n> However, this message (pointing out the advantages of opponent representations wrt more trivial non-opponent representations) is not clearly stated.\\n\\nIn the revised version, we have emphasised the advantages of opponent colour spaces with respect to their encoding capacity. We also have tried to better highlight the other contribution. The comparison of the VQ-VAE's internal colour representation across colour spaces reveals what each embedding vector has encoded and provides some cues on where the colour transformation might occur (in the encoder).\\n\\n> The biological meaningfulness of the selected constraints is not discussed either.\\n\\nIn the revised version, while explaining more on biological meaningfulness of the selected constraints, we tried to make sure it remains accessible to interested readers from other communities.\\n\\n## Major points\\n\\n> The goal of the paper (in my view, a fair comparison of different color spaces in a bottleneck context using an appropriate optimization tool) is not clearly stated.\\n\\nThank you for pointing out this to us. We have revised the abstract, introduction and Section 2 accordingly emphasising this point. We have stated clearly that the proposed framework allows for a fair comparison across colour spaces within a system faced with a constraint on its information flow.\\n\\n> Instead some sentences in the abtract and introduction of the paper suggests that color representation is learnt. [...] Therefore, strictly speaking, there is no \\\"pure color representation learnt\\\" in the inner representation of the autoencoder. [...]\\n\\nWe appreciate you raising this point.\\nThe input-output colour spaces are imposed on the network (in the revised version, we have mentioned this upfront in the second paragraph of the introduction to avoid any confusions). ColourConvNets learn to efficiently compress visual information and to perform the colour conversion. We completely agree that there is no \\u201cpure colour representation learnt\\u201d, and as you pointed out the learnt representation is spatio-chromatic. However, the results presented in section \\u201cInterpreting the Embedding Space\\u201d show embedding vectors are associated with certain colours and their removal results in the disappearance of certain colours. We quantified this with the linear modelling (Section 5.2), the error of this modelling suggests a large part of encoded information in each embedding vector is related to colour features independent of the spatial component (the linear model is applied to all pixels equally). But of course, you are completely right and the complete representation is spatio-chromatic (please find our response regarding Fig 8). We have attempted to clarify this in the revised version.\\n\\n> Another confusing description is talking about the \\\"correlation\\\" and \\\"decorrelation\\\" properties [...]\\n\\nThank you for your suggestion. In the revised version, we have moved the correlation-decorrelation terminology to the expanded \\\"performance advantage\\\" section. We found the suggested references very relevant and have incorporated them. Figure 2 was also revised accordingly.\\n\\n> \\u00a0[...] Mishkin et al., 2017\\n\\nThis is an important point, thank you for pointing this out. 
Mishkin et al. (2017) analysed an AlexNet-like architecture without any bottlenecks. We have discussed this in the revised performance advantage section.\\n\\n> Fig 8 and Fig. C.1\\n\\nFig 8 was created by sampling from the embedding space an example of spatial size 2x2 with all cells set to the same vector index (e.g. [[0,0], [0,0]] corresponding to vector 0). The resulting reconstructed image is an 8x8 image. Previously we had averaged over these pixels for visualisation purposes, thus they were 100% uniform. In the revised version, we have removed this averaging to avoid any confusion (although many of them still appear very uniform since the spatial variation is tiny).\\nThis is explained in Section 5.1 and a new figure for a different combination of vectors in the horizontal, diagonal and vertical directions has been added.\\n\\n### Minor points\\n\\n1. We added this statement in Section 2 and are looking forward to incorporating these constraints in our framework for future research (thanks a lot for the tip).\\n2. The correlation between L and M is 0.9997. In the manuscript we rounded to two decimal places, thus it became 1.00. We have changed the equals sign to approximately equal to avoid any confusion.\\n3. We have added a figure (B.1) regarding the loss functions which shows that convergence is comparable across networks.\\n\\nPlease see the revised manuscript where we have implemented these points (the largest changes are highlighted with blue colour). We would be grateful to receive more comments from you. Thanks a lot.\"}",
"{\"title\": \"Interesting assesment of color representations using autoencoders\", \"review\": \"Summary\\n-------------\\n \\nThe authors analyze the quality of known color representations (non-opponent and opponent) in terms of the quality of the reconstructed images when spatio-chromatic information is constrained by a bottleneck of a discrete (quantized) representation of reduced dimensionality. \\n \\nThe quantized representation and encoding-decoding transforms are defined by a loss function optimized through an autoencoding tool (in particular a Vector-Quantized Variational Autoencoder). The quality of the reconstructed images is measured in terms of (1) low-level similarity metrics (some of them with perceptual meaning), and (2) performance in higher-level tasks such as classification and segmentation. \\n \\nThe conclusion is that perceptually meaningful opponent representations such as DKL and CIELab are better suited to the imposed bottleneck as opposed to color representations related to retinal sensors such as RGB or LMS. \\n\\nGeneral opinion and recommendation\\n-----------------------------------------------------\\n\\nI think the metodology and findings are really interesting to understand why the brain may have developed the opponent \\nrepresentations that have better performance in the presented experiments. The use of an autoencoding tool to enforce the minimization of the loss suggests that comparison between the color representations is fair. \\n\\nHowever, this message (pointing out the advantages of opponent representations wrt more trivial non-opponent representations) is not clearly stated. The biological meaningfulness of the selected constraints is not discussed either. Presentation is confusing at many points (see specific list below), but this can be fixed. Therefore, I think the work should be accepted after proper clarifications and removing some misconceptions.\\n\\nMajor Points\\n----------------\\n\\nThe goal of the paper (in my view, a fair comparison of different color spaces in a bottleneck context using an appropriate optimization tool) is not clearly stated.\\n\\nInstead some sentences in the abtract and introduction of the paper suggests that color representation is learnt. For instance, in the abstract and intro it is said \\\"We propose a novel unsupervised task \\u2014colour conversion\\u2014 to explicitly examine the colour representation learnt by deep networks (referred to as ColourConvNets).\\\" and \\\"the structure of internal representation provides insights on how this [color] transformation is performed within a neural network\\\". At this point the reader may think that the proposed autoencoder will learn a specific color representation well suited for certain goal(s). However, the autoencoder is not learning color representations (they are imposed at input and output), the autoencoder is only imposing certain bottleneck and hence, it is a controlled way to assess the suitability of the considered color representations in constrained settings. In fact, spatial and chromatic parts of visual information are mixed in the vectors of the inner representation of the considered autoencoders. This (hard to interpret) mixture necessarily comes from the reduced dimension of the vectors. Therefore, strictly speaking, there is no \\\"pure color representation learnt\\\" in the inner representation of the autoencoder. 
This misunderstanding about \\\"learning a color representation\\\" when it is actually a spatio-chromatic representation imposed by the constraints and the selected input-output color spaces happened to me, and only disappeared at the end of pages 3 and 4. This misunderstanding should be avoided in the abstract and intro.\\n\\nAnother confusing description is talking about the \\\"correlation\\\" and \\\"decorrelation\\\" properties when presenting the considered spaces (in section 2.2) just after talking about learning efficient representations through the autoencoder. Mentions of the \\\"efficiency\\\" of color spaces through citations to [Buchsbaum83,Ruderman98,Lee01] should appear later in the discussion (not as early as in section 1.1).\\nReaders aware of Barlow's efficient coding hypothesis that leads to transforms that favour decorrelation and equalization\\n(which in color lead to PCA-like transforms [Buchsbaum83], and nonlinear equalizations that explain chromatic adaptation [Laparra12]) may wonder why the cost function did not include decorrelation or independence measures. I would suggest talking about the decorrelation properties only in the discussion (do not mention it in section 1.1 and remove the left part of fig 2 -devoted to highlight correlation and decorrelation-, and make these points in an expanded \\\"performance advantage\\\" section). Actually, decorrelation and equalization properties of these color spaces have been measured in information theoretic units by Foster et al. 2008 and by Malo 2020. I think these references [Buchsbaum83,Ruderman98,Lee01,Foster08,Laparra12,Malo20] should be included in this decorrelation-equalization discussion.\\n\\nAnother confusing statement is the apparent contradiction between this statement \\\"the conversion of RGB images\\ninto other colour spaces yield to no performance improvement in ImageNet (Mishkin et al., 2017)\\\" and the interesting \\nfindings done in this work. It is important to stress that maybe results in (Mishkin et al., 2017) were not subject to big enough dimensionality constraints and so proper representation of color was not that relevant.\\n\\nIt is unclear how Fig 8 and Fig. C.1 were computed. To me it is really important to clarify the non-trivial mixture of spatial and chromatic information in the vectors. From the text, I guess, a single codevector was used to decode the image.\\nBut, given the dimension of the codevectors (bigger than 3), they encode not only chromatic information but also spatial information. Then, how is it possible to obtain uniform color (as in figs 8 and C.1) with no spatial variation from a single codevector? \\n\\nMinor Points\\n------------\\n\\n* it is important to stress that the advantage of the proposed methodology to compare color spaces is that additional constraints can be included (such as entropy, energy, wiring, etc...). This framework's ability to encompass additional constraints is relevant to understanding why the considered representations could have emerged in the brain.\\n \\n* Correlation = 1 between L and M seems like too much. Is this correct?\\n\\n* Is the value of the loss function comparable in the different cases after training? This would be necessary for a fair comparison, isn't it?\", \"references\": \"----------------\\n\\n[Foster08] D.H. Foster, I. Marin-Franch, and S.M.C. Nascimento. 2008. Coding efficiency of CIE color spaces. In Proc. 16th Color Imag. Conf. Soc. Imag. Sci. Tech., 285\u2013288\\n\\n[Laparra12] V. Laparra, S. Jim\u00e9nez, G. Camps and J. Malo. 
2012. Nonlinearities and adaptation of color vision from sequential principal curves analysis. Neural Computation 24, 10 (2012), 2751\\u20132788\\n\\n[Malo20] J. Malo. Information Flow in Color Appearance Neural Networks. Accepted in Entropy Conference 2020 https://arxiv.org/abs/1912.12093\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"review 3765\", \"review\": \"This paper proposes to study an interesting problem of how color informaiton is structured in the variational autoencoders (VAEs). Several instances of VAEs are trained in an unsupervised manner to perform color space conversion. Both low-level and high-level evaluations are performed to study the local statistics and global content of converted images. Several interesting conclusions are drawn from the experiments that help interpret the encoding process of autoencoders.\\n\\nOverall, this paper studies an interesting problem and presents several insightful conclusions. My only concern is how significant the proposed method/task is, and how significant insights these conclusions could provide. To address this, it may show some applications based on the proposed task/conclusions.\\n\\n\\nI only have some minor issues.\\n\\n1. It could be better to include results of state of the art methods on, e.g., object classification and scene segmentation. This could further show the potential application of proposed method.\\n\\n2. encode -> encodes, Line 7, Page 2\\n\\n3. Figure 1 is not referred to in the main text. Besides, it could be better to prodive more details in the caption.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The motivation is unclear and non additional knowledge is given\", \"review\": \"The motivation for this paper is quite hard to understand. A VQ-VAE is directly applied to convert an image from one colour space to another one. However, the colour space transform is human-defined, usually involving linear and a few non-linear (like selecting the maximum value is HSV) procedures. In this case, the latent space of VQ-VAE should be collapsed into this simple equation easily. The analysis of this paper does not teach us any additional knowledge.\\nThe motivation of finding a better embedding space of colour is admirable, unfortunately, the analysis and methodology does not support the motivation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
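Editor's note: the author responses in the record above compute compression rates by hand, e.g. $\frac{224 \times 224 \times 3 \times 8}{56 \times 56 \times 1} = 384$. The snippet below merely reproduces that arithmetic; the function and parameter names are the editor's, and `bits_per_cell` follows the values stated in the rebuttal (1 bit for the 8x128 embedding space, 7 bits for the K=128 case).

```python
def bit_reduction(img_hw=224, channels=3, bits_per_channel=8,
                  emb_hw=56, bits_per_cell=1):
    """Input bits divided by embedding bits, following the rebuttal's formula."""
    input_bits = img_hw * img_hw * channels * bits_per_channel  # 224*224*3*8
    embedding_bits = emb_hw * emb_hw * bits_per_cell            # 56*56*bits
    return input_bits / embedding_bits

print(bit_reduction(bits_per_cell=1))  # 384.0  -> the 8x128 embedding space
print(bit_reduction(bits_per_cell=7))  # ~54.86 -> the K=128, D=128 case
```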
LvJ8hLSusrv | Gradient-based tuning of Hamiltonian Monte Carlo hyperparameters | [
"Andrew Campbell",
"Wenlong Chen",
"Vincent Stimper",
"José Miguel Hernández-Lobato",
"Yichuan Zhang"
] | Hamiltonian Monte Carlo (HMC) is one of the most successful sampling methods in machine learning. However, its performance is significantly affected by the choice of hyperparameter values, which require careful tuning. Existing approaches for automating this task either optimise a proxy for mixing speed or consider the HMC chain as an implicit variational distribution and optimize a tractable lower bound that is too loose to be useful in practice. Instead, we propose to optimize an objective that quantifies directly the speed of convergence to the target distribution. Our objective can be easily optimized using stochastic gradient descent. We evaluate our proposed method and compare to baselines on a variety of problems including synthetic 2D distributions, the posteriors of variational autoencoders and the Boltzmann distribution for molecular configurations of a 22 atom molecule. We find our method is competitive with or improves upon alternative baselines on all problems we consider. | [
"Hamiltonian Monte Carlo",
"HMC",
"MCMC",
"Variational Inference"
] | Reject | https://openreview.net/pdf?id=LvJ8hLSusrv | https://openreview.net/forum?id=LvJ8hLSusrv | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"56-zdW0ZRT",
"aWMQXN4Uhmb",
"tVP_6Ym9b3",
"5StFlE4KNFN",
"Is6gT1EKfAU",
"1DATVv1VmxS",
"ETWCGqkBGbc",
"RyIvJ8WSP-X",
"MdEYCRQ0wNb",
"Q-LXaPQ1Wjy",
"L9nl3U8Eh1",
"OXuCXTBvEqC",
"OdoXuwwPFBN",
"5fGzsvKcotc",
"UgKA9qNWfJw"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040428910,
1606164317190,
1606162483823,
1606044365288,
1605983021431,
1605975836957,
1605380671470,
1605380388608,
1605379984761,
1605379458168,
1605379266799,
1604144246113,
1603886370375,
1603737780902,
1603625419470
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3764/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a tuning strategy for Hamiltonian Monte Carlo (HMC). The proposed algorithm optimizes a modified variational objective over the T step distribution of an HMC chain. The proposed scheme is evaluated experimentally.\\n\\nAll of the reviewers agreed that this is an important problem and that the proposed methods is promising. Unfortunately, reviewers had reservations about the empirical evaluation and the theoretical properties of the scheme. Because the evaluation of the scheme is primarily empirical, I cannot recommend acceptance of the paper in its current form.\\n\\nI agree with the following specific reviewer concerns. The proposed method does not come with any particular guarantees, and particularly no guarantees regarding the effect of dropping the entropy term and using an SKSD training scheme to compensate. While guarantees are not necessary for publication, the paper should make up for this with comprehensive and convincing experiments. I agree with R1 that more careful ablation studies on toy models are needed, if nothing else to reveal the strengths and weaknesses of the proposed approach. I would also recommend a more careful discussion about the computational cost of this method and how it can be fairly compared to baselines. I don't agree that \\\"deliberately wasteful\\\" experiments reveal much, especially if running more realistic experiments reduces the relative impact of the proposed method.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your response.\\n\\nRegarding the optimization of the initial distribution using the alpha-divergence, we provide a short summary here and refer the reviewer to https://arxiv.org/pdf/1511.03243.pdf for the full details.\", \"the_alpha_divergence_is_a_generalized_divergence_given_by\": \"$D_\\\\alpha[p||q]=\\\\frac{1}{\\\\alpha(1-\\\\alpha)}(1-\\\\int p(x)^\\\\alpha q(x)^{1-\\\\alpha} dx)$\\n\\nIt can be shown that in the limit as $\\\\alpha \\\\rightarrow 0$, $D_\\\\alpha[p||q] \\\\rightarrow KL(q||p)$ and also that as $\\\\alpha \\\\rightarrow 1$, $D_\\\\alpha[p||q] \\\\rightarrow KL(p||q)$. As an outline of how this is proven: first consider the case as $\\\\alpha \\\\rightarrow 0$ then take the limit inside the integral and expand $p^\\\\alpha =\\\\text{exp}(\\\\text{log}(p^\\\\alpha))$ using the series expansion for $\\\\text{exp}(x)$. Repeat this for $q(x)^{1-\\\\alpha}$ and taking the limit as $\\\\alpha \\\\rightarrow 0$ gets the result. The result for $\\\\alpha \\\\rightarrow 1$ can be proven by reparameterizing e.g. $\\\\gamma =1-\\\\alpha$ and using the previous result.\\n\\nTo optimize this divergence we write it in this form\\n\\n$D_\\\\alpha[p||q]=\\\\frac{1}{\\\\alpha(1-\\\\alpha)} -\\\\frac{1}{\\\\alpha(1-\\\\alpha)} \\\\int q(x) p(x)^\\\\alpha q(x)^{-\\\\alpha} dx=\\\\frac{1}{\\\\alpha(1-\\\\alpha)} - \\\\frac{1}{\\\\alpha(1-\\\\alpha)} E_{q(x)} [ (\\\\frac{p(x)}{q(x)})^\\\\alpha]$\\n\\nIf we only know $p(x)$ up to a normalizing constant $p(x) = \\\\frac{p*(x)}{Z}$ then the form becomes\\n\\n$D_\\\\alpha[p||q]=\\\\frac{1}{\\\\alpha(1-\\\\alpha)} - \\\\frac{1}{\\\\alpha(1-\\\\alpha)} \\\\frac{1}{Z^\\\\alpha} E_{q(x)}[(\\\\frac{p*(x)}{q(x)})^\\\\alpha]$\\n\\nWe then estimate the expectation using a Monte Carlo average with K samples from $q(x)$\\n\\n$D_\\\\alpha[p||q] \\\\approx \\\\frac{1}{\\\\alpha(1-\\\\alpha)} - \\\\frac{1}{\\\\alpha(1-\\\\alpha)} \\\\frac{1}{Z^\\\\alpha} \\\\frac{1}{K} \\\\sum_{k=1}^K (\\\\frac{p*(x_k)}{q(x_k)})^\\\\alpha$\\n\\nTo minimize this with respect to $q$ we see that we only need to maximise the second term\\n\\n$\\\\text{max} \\\\frac{1}{\\\\alpha(1-\\\\alpha)} \\\\frac{1}{Z^\\\\alpha} \\\\frac{1}{K} \\\\sum_{k=1}^K (\\\\frac{p*(x_k)}{q(x_k)})^\\\\alpha$\\n\\nEquivalently we can maximise the log of this\\n\\n$\\\\text{max} -\\\\text{log}(\\\\alpha(1-\\\\alpha)) - \\\\text{log}(Z^\\\\alpha)+\\\\text{log}(\\\\frac{1}{K} \\\\sum_{k=1}^K (\\\\frac{p*(x_k)}{q(x_k)})^\\\\alpha)$\\n\\nWe consider the limit as $\\\\alpha \\\\rightarrow 1$ from below and ignore the constant during optimization. This gives the objective we use for training.\\n\\n$\\\\text{max} \\\\quad \\\\text{log}(\\\\frac{1}{K} \\\\sum_{k=1}^K \\\\frac{p*(x_k)}{q(x_k)})$\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your response.\\n\\nWe agree with the reviewer that this new objective has many features that are worth further exploration. It would be enlightening in further work to investigate the U-turn behaviour of our trained HMC chains and perhaps combine with a NUTS-type strategy. However, we feel this type of analysis may be out of the scope of our current preliminary paper, introducing this objective and getting first results on relevant problems.\\n\\nAs for the motivation for our SKSD objective, we believe the intuition gained from the Gaussian example extends to more complex targets. In any case, if the initial distribution is too narrow and so the HMC chains cannot explore the full extent of the target, whether that be heavy tails or other modes, the SKSD will be large as it measures all discrepancy between the HMC sampling distribution and the target. Therefore, there will be a learning signal to increase the scaling in order to allow the HMC chains to fully explore the target regardless of the specific form/shape the target takes. Conversely, if the scale is too large and the HMC chains over sample from the tails, then there will also be a signal to decrease the scaling due to this discrepancy being represented in the SKSD.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thanks for addressing some concerns.\\n\\nWhat I have meant with a deterministic HMC proposal for the standard Gaussian target and step size \\\\sqrt(2) is that for a given initial position x_1, the final state of the Markov chain after K Metropolis-Hasting steps with 2 leapfrog-steps is completely deterministic (independent of the sampled velocities).\\nFor x_1 drawn from a non-deterministic initial distribution, the distribution of the position after K MH-steps will then also be non-deterministic, but will coincide with the initial distribution for even K.\\n\\nMy point here was that the entropy of the HMC proposal can influence the efficiency of exploring the state space, but the entropy terms are neglected in the proposed approach for tuning HMC parameters. I agree with the authors reply to another reviewer that computing the entropy of the one step HMC-proposal is non-trivial in contrast to the more tractable MALA case as done in Titsias and Dellaportas (2019). My concern is therefore more general if the proposed objective discourages inefficiencies such as U-turns that are somehow related to the entropy \\u2013 and combining it with NUTS-type strategies might be illustrating. \\n\\nThe motivation for the training objective of the initial distribution via the SKDS and its automatic adjustment to the initial distribution is still not clear to me in the general case (beyond the mentioned one-dimensional Gaussian example). What happens say for a Gaussian initial distribution if the target is multimodal or has heavy tails and adapting HMC seems more challenging?\"}",
"{\"title\": \"Follow-up regarding 2D experiments\", \"comment\": \"We have looked at the comparison of methods using the mean and covariance statistics within the 2D experiments, however, we believe that these metrics may not be very useful in this case. Many of the distributions we look at are highly non-Gaussian so we think that the use of mean and covariance data to compare them does not make that much sense. For the two targets that are close to Gaussian (the Gaussian and Laplace targets), we compared the mean vectors and covariance matrices between the samples and the ground truth from rejection sampling using the mean squared error (just using the upper triangular matrix for the covariance matrices):\", \"mean_mse\": \"| | Gaussian | Laplace |\\n| :------------- | :----------: | -----------: |\\n| $\\\\alpha=0$ | 1.18e-3 | 4.04e-5 |\\n| $\\\\alpha=1$ | 2.66e-3 | 2.08e-4 |\\n| SKSD & $\\\\alpha=0$ | 8.29e-6 | 5.64e-5 |\\n| SKSD & $\\\\alpha=1$ | 2.25e-5 | 5.00e-5 |\\n| Min $p=0.25$ | 6.85e-7 | 2.00e-6 |\\n| NUTS | 4.38e-4 | 1.23e-4|\\n\\nCovariance MSE\\n\\n| | Gaussian | Laplace |\\n| :------------- | :----------: | -----------: |\\n| $\\\\alpha=0$ | 1.17 | 9.33e-4 |\\n| $\\\\alpha=1$ | 8.29e-3 | 1.24e-2 |\\n| SKSD & $\\\\alpha=0$ | 1.05e-2 | 4.99e-4 |\\n| SKSD & $\\\\alpha=1$ | 2.12e-4 | 3.29e-3 |\\n| Min $p=0.25$ | 2.11e-2 | 5.36e-2 |\\n| NUTS | 2.84e-3 | 1.90e-3|\\n\\nWe found that our method does well at estimating the covariance information. For the mean estimation the baseline \\\"Min $p=0.25$\\\" does seem to do better, although in this case, both the baseline and our method have estimation errors that are very low, so we believe that the differences are not very meaningful here.\\n\\nOverall, we think that comparing methods using mean and variance metrics may not provide much useful information in regards to which methods are best on these 2D targets, especially when some of them are highly non-Gaussian.\"}",
"{\"title\": \"response to authors\", \"comment\": \"I have read the rebuttal and other reviews. Unfortunately, I don't think that my concerns are addressed, and, moreover, I don't see how they could possibly be addressed in a rebuttal or via minor changes in the paper.\\n\\n1. I'm greatly confused by the authors' response since I do not understand how one could directly optimize the forward KL-divergence $D_{\\\\text{KL}}(p||q)$ without sampling from $p$ (in the context where this KL divergence could not be evaluated analytically). At this point, I think the authors must clarify the method of optimization of the forward KL-divergence since it could greatly affect the approach.\\n2. In practice, people don't usually apply HMC at all costs, \\u201cGradient-based Adaptive Markov Chain Monte Carlo\\\" proposes a very relevant approach to yours since Langevin dynamics and HMC have much in common (you can consider Langevin dynamics as a single step HMC).\\n3. In contrast to your presentation, I consider the comparison against grid search to be weak rather than highlighting.\"}",
"{\"title\": \"Response to Reviewer 2 (part 2/2)\", \"comment\": \"We address the minor comments here.\\n\\nIt is not obvious that equation (6) minimizes the $\\\\alpha=1$ divergence, we refer the reviewer to our reference cited in the paper for an explanation for why this is the case. The talk cited is available online at https://www.youtube.com/watch?v=Ev-6s8b3QrI please refer to 18:10 to 20:35 for the explanation. For k=1 it is indeed the standard VAE objective so k does have to be greater than 1 for the comparison with the $\\\\alpha=1$ divergence to be valid.\\n\\nThe $\\\\gamma$ variables in section 3 are the sources of randomness from which the momentum variables will be derived. The function $f_\\\\phi$ encapsulates all transformations applied to these primitive random variables including the transformation that will transform the $\\\\gamma$ variables N(0, I) into the momentum variables N(0, diag(m)).\\n\\nAnother reviewer has also highlighted the choice of acceptance probability so we reproduce our response here to this concern.\\nThe target minimum acceptance rate was chosen to be 0.25 to be in line with the original work \\u201cLearning Deep Latent Gaussian Models with Markov Chain Monte Carlo, Hoffman 2017\\u201d. In this work, HMC was used targeting the posterior in deep latent gaussian models. During training batches of training samples are taken giving a batch of posteriors to target, $\\\\\\\\{p(z|x_n)\\\\\\\\}$. The stepsize is then chosen to keep the minimum acceptance probability for any given $x_n$ to be 0.25 in order to allow the worst case chain to still mix. We agree this choice is rather strange when applied to this experiment with fixed targets, however, this was done to ensure consistency between the 2D experiments and DLGM experiments. The inclusion of the NUTS baseline ensures there is still a challenging SOTA method to compare to for this experiment.\"}",
"{\"title\": \"Response to Reviewer 2 (part 1/2)\", \"comment\": \"We thank the reviewer for their detailed feedback and constructive comments. We address the concerns below.\\n\\nRegarding the example given of HMC applied to a standard normal target. We would like to clarify what the reviewer means by the method proposing deterministically from a point mass target. From our calculations, we find that if the chain begins at position and momentum $x_1$ and $\\\\nu_1$ then after one step the position will be $\\\\sqrt{2} \\\\nu_1$ and after a second step the position will be $-x_1$. Since both the initial momentum and position are drawn from initial distributions, they are random so across many parallel chains, we will not be drawing from a point mass distribution.\\nWe are unable to provide a proof that HMC can never sample from a point mass distribution but we conjecture that this is highly unlikely on practical problems.\\n\\nAs for why the SKSD automatically adjusts the initial distribution width even though it is the discrepancy between the final state distribution and the target, we refer back to Figure 1. Consider the case where we use initial distribution 1 (pink) with a scale of 1. After training, all step sizes will be set to 0 because the optimizer has recognized that the expected log target can be maximised by staying at the initial sample points because the initial distribution is narrow and concentrated on the target mode. The final state distribution in this case will be equal to the initial distribution and we note that it is quite different to the target hence the SKSD will be high. If the scale were now to be increased to 4 then effectively we would be starting with initial distribution 2 (green). Now training will effectively optimize the step sizes resulting in a final state distribution that is very close to the target giving a very low SKSD. Therefore, the SKSD applied to the final state distribution provides a learning signal to increase the scale of the initial distribution because the SKSD can be decreased by increasing the initial scale. We hope this has alleviated the reviewer\\u2019s concerns for this section, we will add clarifying text here in an updated version of the paper.\\n\\nRegarding other bounds that have been used by other works such as Salimans et al., 2015 and Thin et al., 2020, what we mean by they are too loose in practice is that the bounds get looser as you add more steps in the chain prohibiting use on reasonably sized chains. They generally have a form of being equal to the standard ELBO subtract a KL term between a true and approximate distribution on variables relating to all the previous states in the chain. As the chain gets longer, the number of these variables increases making the KL term larger in general. This would then cause problems in optimization since the size of this term relative to the standard ELBO increases meaning the model just learns to fit to the approximate reverse distribution as opposed to fitting to the target as desired. Indeed Salimans et al., 2015 consider only very short chains (only 1 step for their DLGM experiment) whereas we apply our method to chains with 30 or 50 accept/reject steps. We will add text clarifying this point to the paper.\\n\\nWhen evaluating our method, we found acceptance rates to consistently stay uniformly quite high (near 1) when applied to different targets. 
The ability to explore comes from the granularity of control afforded to the model, as the stepsizes can be tuned on a per-dimension and per-accept/reject-step level, as opposed to one uniform constant. Regarding an extra incentive to hit a certain target acceptance probability: in this paper we just wanted to focus on an objective inspired by variational inference, without needing to use rules of thumb for targeted acceptance rates, though it may indeed be an interesting further direction to look at combining our objective with a targeted acceptance rate objective.\"}",
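To make the exchange about the standard Gaussian example concrete, here is a small numerical check of the calculation described above (step size $\sqrt{2}$, unit mass matrix); this sketch is our illustration, not code from the paper:

```python
import numpy as np

def grad_U(x):
    # Potential U(x) = x^2 / 2 for a standard Gaussian target.
    return x

def leapfrog(x, v, eps, L):
    # Standard leapfrog integrator for H(x, v) = U(x) + v^2 / 2.
    v = v - 0.5 * eps * grad_U(x)
    for i in range(L):
        x = x + eps * v
        if i < L - 1:
            v = v - eps * grad_U(x)
    v = v - 0.5 * eps * grad_U(x)
    return x, v

eps = np.sqrt(2.0)
x0 = 1.7
for v0 in (-2.0, 0.3, 1.0):              # different sampled momenta
    x1, _ = leapfrog(x0, v0, eps, L=1)   # x1 = sqrt(2) * v0
    x2, v2 = leapfrog(x0, v0, eps, L=2)  # x2 = -x0 for every v0
    print(round(x1, 6), round(x2, 6), round(v2, 6))
# With L = 2, the proposal maps x -> -x regardless of the momentum, and
# H = (x^2 + v^2) / 2 is conserved exactly, so the proposal is always
# accepted: the position after an even number of MH steps equals the
# initial position, which is the behaviour Reviewer 2 points out.
```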
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their detailed feedback and constructive comments. We address the concerns below.\", \"major_concerns\": \"1) We agree with the reviewer that our method is heuristic based, however, we are targeting the parallel MCMC use-case where many independent samples are taken from the ends of short parallel MCMC chains. Since the chains are short, very few guarantees can be given about convergence to the true target and indeed in this regime we need to use engineering methods such as well chosen initial distributions and tuned parameters to get good results. Our method is not applicable to the long chain case where such guarantees may be given. As for the reviewer\\u2019s second point regarding mass covering objectives, we agree that this is a hard problem but we must clarify that the alpha-divergence objective we use does not require any samples from the true target distribution - it only requires the target density. The wording in the paper may be confusing as we also mention training via maximum likelihood using target samples (if they are available) which is also a mass-covering objective but this is distinct to the $\\\\alpha=1$ divergence objective which is mass-covering and doesn\\u2019t require target samples. We will clarify this in an update to the paper.\\n\\n2) Regarding the paper \\u201cGradient-based Adaptive Markov Chain Monte Carlo, Titsias 2019\\u201d we acknowledge that a reference to this work should have been made in the paper, we will add this in a new version. However, we believe it is not directly comparable to our method because the method proposed by Titsias cannot be applied to HMC in its current form as it requires the entropy of the markov one step proposal $p(x_t | x_{t-1})$ which would be intractable for HMC. Indeed, they only use their tuning method on random walk metropolis and metropolis adjusted langevin samplers.\\n\\n3) Regarding the empirical results on the DLGMs, we agree that the method may not be useful for computer vision tasks in practice, however, this example was to show that our method can be competitive with state of the art methods for training DLGMs on complex data of which images is a good example rather than showing good performance on computer vision specifically. Our method could be used on other modalities in practice but many similar methods have evaluated on MNIST so we felt this was a good application to benchmark on. As for the experiment on molecular configurations, we would like to highlight our result that we can improve over grid searching for HMC parameters when using an alpha-divergence initial distribution. We believe improving over this brute force approach is a significant result regarding the usefulness of our method. We admit that the metric used is non-ideal but this is due to the high level of difficulty of this problem and this level of difficulty is precisely why we evaluate on this problem.\", \"minor_comments\": \"1) As we mentioned previously, the level of difficulty of this problem prevents the use of other measures of performance such as the KLD of the overall distribution due to the high dimensionality. Furthermore, the marginals of the Boltzmann distribution of proteins, especially those of the dihedral angles, are of high importance for the analysis of their properties such as how they fold, so our performance measure is of practical relevance. We will update section 5.3 in this regard. 
Would it be possible for the reviewer to suggest other methods that could be used to evaluate performance on this problem?\\n\\n2) We will update the introduction to the molecular configurations section with additional detail in an update to the paper.\\n\\n3) In Section 4 we introduce the scaling factor $s$, which does not need to be chosen by the user, since it is automatically updated using the SKSD objective. The variable $\\mu$ is the mean of the samples from the initial distribution (which is known when the initial distribution is a Gaussian, but can be estimated easily if the initial distribution is a normalizing flow). We hope this clarifies the reviewer\\u2019s concerns about this section.\\n\\nWe would like to thank the reviewer again for their encouragement regarding future work. Please let us know if there are any more concerns.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We would like to thank the reviewer for their comments and analysis of our paper. We address the concerns below.\\n\\nWe acknowledge that there should have been discussion of the training time results given in the paper, we will add this text in an update to the paper. We would also like to take the chance now to explain why we ended up with these training time results. In the DLGM experiment, we were aiming at showing clearly the benefit that SKSD training can bring in terms of log-likelihood and we were not optimizing for efficiency of training. Specifically, for the case where we do not use the SKSD, for each optimization step we took only one sample from the HMC chain with which to estimate the expected log target (equation 3). For the case where we do use the SKSD, we take one HMC sample for the expected log target but then also take 30 samples to estimate the SKSD with. This will obviously take a lot longer and is an inefficient use of resources. It would be optimal to take one batch of samples e.g. 30 and then estimate both the SKSD and the expected log target with that same batch. However, if we had done that in our experiments then the variance in the expected log target estimator would have been a lot less and thus it would have been unclear if our improved results were due to the SKSD training or just due to the reduced variance in the expected log target. So in summary, our experiment using the SKSD was deliberately wasteful to make sure the benefit of the SKSD is clear. \\nIndeed, in our molecular configurations experiment, we do use the same batch of samples to estimate the expected log target and the SKSD and we found that the training times are practically equivalent whether or not the SKSD term is included. We also find in the molecular configurations experiment that including the SKSD term improves performance showing that the model can be improved by adding the SKSD without increasing training time as long as the same batch of samples is used.\\n\\nRegarding section B.1, we will update the paper, explaining more clearly when we use each objective.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their detailed feedback and constructive comments. We address the concerns and answer questions below.\\n\\n1) We agree with the reviewer that the choice of a vector epsilon is non-standard and thus this choice should have been better explained during the introductory section of the paper. We do indeed make this choice as to allow the model more flexibility in adapting the leapfrog scheme to each specific target. We will add text clarifying this to the paper.\\n\\n2) Combining our method with NUTS would be an interesting piece of future work, however, this may be out of the scope of our current paper where we attempt to provide the groundwork and initial evaluations of this new objective. With regards to our NUTS implementation, we used Algorithm 6 from \\u201cThe No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo, Hoffman & Gelman 2011\\u201d which uses dual averaging to tune the stepsize. We will clarify that this is the case within the paper.\\n\\n3) Regarding comparing the mean and covariance for the 2D toy experiments, we agree that these can also be useful for assessing varying levels of convergence to the target. However, in the main text of the paper we decided to use the kernelized stein discrepancy as this should encapsulate all discrepancy between the true distribution and sampling distribution in one number. The variance information may indeed give insightful information about any discrepancy in spread between these distributions, we can include an extra table comparing using this metric in the appendix.\\n\\n4) The target minimum acceptance rate was chosen to be 0.25 to be in line with the original work \\u201cLearning Deep Latent Gaussian Models with Markov Chain Monte Carlo, Hoffman 2017\\u201d. In this work, HMC was used targeting the posterior in deep latent Gaussian models. During training batches of training samples are taken giving a batch of posteriors to target, $\\\\\\\\{p(z|x_n)\\\\\\\\}$. The stepsize is then chosen to keep the minimum acceptance probability for any given $x_n$ to be 0.25 in order to allow the worst case chain to still mix. We agree this choice is rather strange when applied to this experiment with fixed targets, however, this was done to ensure consistency between the 2D experiments and DLGM experiments. The inclusion of the NUTS baseline ensures there is still a challenging SOTA method to compare to for this experiment.\\n\\n5) We would like to ask the reviewer for clarification on this point as we believe the usual ESS metric for MCMC does not apply to our method as we take samples from the ends of many short parallel chains as opposed to samples from within one long chain. Therefore, each of the samples from our method is independent and so the ESS metric would not make sense in this case.\\n\\n6) In our paper, we chose challenging problems in order to stress-test the method on real-world problems from ML. We acknowledge that this may be less illuminating as to the properties of our proposed method but this does provide assurances that our method is not limited to only toy/synthetic targets. The reviewer does mention some problems that may be in between these extremes, would it be possible for the reviewer to provide some examples of problems of this type that they have in mind?\"}",
"{\"title\": \"interesting line of work:numerical studies are not entirely convincing\", \"review\": \"Summary:\\n========\\n\\nthe article proposes to tune an HMC sampler by maximising E_\\\\param[\\\\log target(X_T)] over the parameters of the HMC sampler. Furthermore, the article studies the influence of the initial distribution. While the approach is certainly interesting, I have not found the empirical studies satisfying enough.\", \"comments\": \"=========\\n1. The article considers a vector \\\\epsilon as well as a mass matrix. Usually, the parameter epsilon is chosen as a scalar number: choosing epsilon as a vector can indeed also be seen as a particular type of preconditioning (or choice of mass matrix). I have found this part of the paper not extremely well explained.\\n\\n2. It is indeed also difficult to choose L, and that is mainly what the no-U-turn method tries to automate. In practice, dynamically adapting L can make a lot of difference in high-dimensional settings and/or different parts of the state space exhibit different scales. It would have been very interesting to investigate how the proposed method can be used **in conjunction with** no-U-turn type strategies. Furthermore, it was not entirely clear to me how the \\\\epsilon was tuned when the no-U-turn was used.\\n\\n3. In the 2D example, since the authors have used rejection sampling to produce the plots, it is also easy to accurately estimate the mean/covariance of the target distribution. It would have been interesting to use these statistics [although, it is not possible to do so in more complex scenarios] and see if this leads to improved performances.\\n\\n4. in the \\\"\\\\min \\\\bar{p}\\\" method, why choose a target acceptance rate of 0.25? My experience says that the number is usually chosen much higher.\\n\\n5. While reporting the KSD, I think it would have been very interesting to report the ESS [or variations of it], since it is the standard measure of efficiency in the MCMC literature.\\n\\n6. Finally, while the 2D examples are certainly very interesting, I am not convinced that directly going from 2D to super-difficult-target is the right approach to understand the properties of the proposed methods. There are many settings that are more difficult than these 2D distributions, but much more tractable than the DLG/molecular targets.\\n\\nIn summary, I think that the authors are proposing an interesting line of research, but more careful numerical investigations are necessary to really understand the worth of the methodology.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper is well-written, and the authors do an excellent job of articulating the problem and motivate their idea well; I do have some reservations. I vote for a weak accept.\", \"review\": \"### Summary:\\n\\nThis paper proposes a variational inference based framework to tune some of the hyper-parameters of HMC algorithms automatically. The authors drop the entropy term from regular ELBO formulation, which facilitates a gradient-based approach. However, dropping this term requires extra care for which authors offer an automated-method. Finally, the authors demonstrate empirical validation on several problems.\\n\\n### Strength:\\n\\nThe authors do an excellent job of articulating their intuition behind the idea both (see section 3.) While dropping the entropy term from ELBO decomposition is heuristic-based, the explanations are well-formulated, and Figure 1 does an excellent job of getting the point across. \\n\\nMore so, since dropping the entropy term can cause pathological behaviors, the authors propose a method to ensure wider initial distributions. I commend the authors for the non-trivial engineering that was required to make their ideas work. I also, commend the author's effort of conducting statistical tests and extensive empirical evaluations. \\n\\n### Concerns:\\n\\nMy main concern with the work is that it is often on-par with the competing methods--I understand that a new method doesn't need to be SOTA on every benchmark--and the SKSD enabled variants that achieve this performance are prohibitively slow (see Tables 6 and 9.) I could not help but feel concerned when no discussion was offered for an almost tenfold increase in the computational time for training DLGMs. To convince me, I will suggest offering an honest discussion on the run-times of the approaches.\\n\\nI find the discussion in section B.1important, and believe it should be more formal. Specifically, I will suggest algorithimizing what objective is used at which stage. Alternatively, authors can choose to restructure this some other way; however, it is too important to be left in its current form. \\n\\n### Updates after the rebuttal\\n\\nI like the paper and found the revised version more transparent. I support the engineering approach of the paper; however, as we all know, these papers often require authors to go to greater lengths to convince. After reading the other discussion and reviews, I think the authors can consider a few additional experiments. I would suggest investing in a more involved toy-experiment to better motivate the engineering solutions. If possible, authors can also consider a more careful ablation study to establish the relevance of each component on this toy-model. Further, the authors offered explanations for the training time aberrations; if possible, authors can consider including the equally-fast-variants in the revision to be more convincing.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"an engineering trick with limited practical significance\", \"review\": \"The paper proposes a method to optimize the parameters of the Hybrid Monte Carlo (HMC) algorithm (the step size and the diagonal of momentum's covariance matrix). In order to do that, the authors consider the distribution of samples q_T() obtained after T iterations of the algorithm (T accept/reject steps) starting from some distribution q_0(). Then, a reasonable objective for the optimization would be the KL-divergence between q_T() and the target density p(). However, the evaluation of the KL-divergence includes the entropy of q_T(), whose density is intractable due to numerous accept/reject steps. The proposed solution to this difficulty is to ignore the entropy term and maximize the log density of the target on samples from q_T(). To avoid the degenerate solution (due to ignorance of the entropy), the authors propose to choose q_0() carefully, e.g., to learn q_0() as a normalizing flow approximating the target p() via minimization of mass-covering alpha-divergence. The latter involves the usage of samples from the target distribution.\", \"major_concerns\": \"1. The method is an engineering trick rather than a grounded approach to the optimization of sampling algorithms. Indeed, in many cases, people use MCMC methods to obtain guarantees for the sampling procedure. The proposed method removes all these guarantees by relying on the choice of the initial distribution q_0(). Moreover, the optimization of q_0() via mass-covering objectives is a notoriously hard problem since samples from the target distribution are not given in a usual setting.\\n\\n2. I think the paper lacks an essential comparison with the method proposed by Titsias (Gradient-based Adaptive Markov Chain Monte Carlo, 2019). This paper proposes a more general objective for parameter optimization explicitly fostering high entropy of the proposal. Moreover, in contrast with the learning step of q_0(), it operates in an adaptive manner, not requiring any pretraining steps.\\n\\n3. Given the limited theoretical novelty, I would expect the ICLR paper to demonstrate highly successful empirical results. However, it is not the case for the current submission. I'm quite confident that the results on CV tasks are out of practical interest. Also, for the molecular dynamics, the metrics' choice hinders the assessment of the practical significance.\", \"minor_comments\": \"1. I don't find the comparison of marginal distributions on the 60d problem to be a convincing way to compare samplers' performance. I would suggest considering either another metric or another problem.\\n2. I also would suggest to include the description (at least the formula for the density) of the problem \\\"molecular configurations.\\\" It would provide the reader with an additional intuition on its difficulty.\\n3. I think section 4 would benefit from the clear description of the choice of s, for instance, from the description of the variable mu, which appears there for the first time.\", \"additional_comments\": \"After rereading the review, I feel that it may sound a bit harsh for the authors. Therefore, I want to say aloud that I find the paper's subject to be of great interest, consider any work in this direction valuable, and encourage the authors to continue their studies. 
My criticism is only an attempt to approach the review process objectively.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Review 2\", \"review\": \"Summary:\\nThe paper introduces a gradient-based approach for tuning the step-size and the diagonal mass matrix of HMC together with the parameters of an initial distribution for the Markov chain. They suggest different objectives amenable for SGD: maximize the expected target log-density of the final state of the chain, but also an objective to ensure a somewhat \\u2018wide\\u2019 initial distribution. The approach is illustrated on 2-d toy models, deep latent Gaussian models on (Fashion) MNIST and molecular configurations.\", \"positives\": \"The submission suggests a practical approach for tuning HMC that remains a challenging problem. The combination of the different objectives is new as far as I am aware. Empirical experiments are provided to justify the approach on standard benchmark problems, where it is seems to be competitive with state of the art methods, and a more extensive study on sampling molecular configurations.\", \"negatives\": \"I feel that further arguments are needed to justify why the entropy of the proposed state can be ignored when adapting the hyperparameters of the sampler. The paper argues that \\u201cSince HMC, by construction, cannot collapse to such a point mass, we argue that the entropy term can be dropped provided the initial distribution of the chain has enough coverage of the target\\u201d. I am not convinced by this: take a standard normal target, then a leapfrog-integrator with 2 steps, unit mass matrix and step size of sqrt(2) proposes deterministically from a point mass distribution and this happens everywhere on the state space. While this might be an unrealistic example, it is not clear to me how such situations can be avoided in general.\\nIt is also not clear to me why the Sliced Kernelized Stein Discrepancy objective automatically adjusts the width of the initial distribution. In equation (4) the discrepancy is between the final state and the target and I fail to see how this relates to the width of the initial density.\", \"recommendations\": \"I vote for a weak reject at the moment. The ideas proposed in the paper are indeed interesting. However, I am not yet convinced that the objectives yield HMC kernels that explore the state space well (so the HMC proposal does not become close to deterministic/completes a U-turn so that entropy comes largely from the initial distribution which is however trained with a different objective). Also the use of the Sliced Kernelized Stein Discrepancy specifically should be better motivated. I am happy to increase my score if the authors better clarify these points.\\n\\nFurther comments/issues:\\nThe authors claim in the abstract that existing approaches \\u201coptimize a tractable lower bound that is too loose to be useful in practice\\u201d. Can this be backed up more concretely? I understand that such methods (such as Thin et al., 2020) use a looser bound, but not that these types of bounds are useless in practice.\\nIn section 3.1, how do the acceptance rates compare for the narrow vs the wide initial distribution? My intuition would be that the acceptance rates for the narrow one are smaller than for the wide one. Would it then be possible to get a better exploration even in this case by including an objective to target an acceptance rate (say increase the stepsize if the acceptance rate is above 0.65)?\", \"minor_comments\": \"Is it obvious that equation (6) minimizes the 1-divergence? 
For k=1, is this not the standard VAE/0-divergence, while for k>1 the IWAE objective can be seen as a 0-divergence on an extended space?\\nWhat are the \\\\gamma variables simulated from N(0,I) exactly? Are they really the momentum variables? Are the initial momentum variables not from N(0,diag(m))?\\nIn the experiments from Section 5.1, why do you target a minimum acceptance rate of 0.25 and not an average rate of 0.65, which seems a more common choice in the adaptive MCMC literature?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
dKwmCtp6YI | Representation and Bias in Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling | [
"Ada Wan"
] | Inspired by the phenomenon of performance disparity between languages in machine translation, we investigate whether and to what extent languages are equally hard to "conditional-language-model". Our goal is to improve our understanding and expectation of the relationship between language, data representation, size, and performance. We study one-to-one, bilingual conditional language modeling through a series of systematically controlled experiments with the Transformer and the 6 languages from the United Nations Parallel Corpus. We examine character, byte, and word models in 30 language directions and 5 data sizes, and observe indications suggesting a script bias on the character level, a length bias on the byte level, and a word bias that gives rise to a hierarchy in performance across languages. We also identify two types of sample-wise non-monotonicity --- while word-based representations are prone to exhibit Double Descent, length can induce unstable performance across the size range studied in a novel meta phenomenon which we term "erraticity". By eliminating statistically significant performance disparity on the character and byte levels by normalizing length and vocabulary in the data, we show that, in the context of computing with the Transformer, there is no complexity intrinsic to languages other than that related to their statistical attributes and that performance disparity is not a necessary condition but a byproduct of word segmentation. Our application of statistical comparisons as a fairness measure also serves as a novel rigorous method for the intrinsic evaluation of languages, resolving a decades-long debate on language complexity. While these quantitative biases leading to disparity are mitigable through a shallower network, we find room for a human bias to be reflected upon. We hope our work helps open up new directions in the area of language and computing that would be fairer and more flexible and foster a new transdisciplinary perspective for DL-inspired scientific progress. | [
"multilinguality",
"science for NLP",
"fundamental science in the era of AI/DL",
"representation learning for language",
"conditional language modeling",
"Transformer",
"Double Descent",
"non-monotonicity",
"fairness",
"meta evaluation",
"visualization or interpretation of learned representations"
] | Reject | https://openreview.net/pdf?id=dKwmCtp6YI | https://openreview.net/forum?id=dKwmCtp6YI | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Epu0jj3HbI",
"cZ3ukDyTMP4",
"rrn966oiTjk",
"q9moLR4DMkF",
"F_R_iujmKO",
"oo-G9pwX4BG",
"5vVEqh3f4Vd",
"MhuxIhbU8NB",
"hongXQyc_9s",
"76s5dfLC6sM",
"1DBqOoGCZI",
"MNJQVbu9lz0",
"51q7nkcW7yZ",
"Dnd42UqhNsn",
"YTcewTWPnCE",
"nfIPmG5ej1-",
"F8Vl64Npsnf",
"vyb_ir4qYaF",
"zh4Xv5Ynj_D",
"XFM-CP46DgZ",
"6vp_hHxTzOu",
"ZTDrA_qwKLm"
],
"note_type": [
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1714335656572,
1611359994392,
1610040352060,
1606248341709,
1606243804016,
1605754795573,
1605726577441,
1605668775142,
1605665845225,
1605236450867,
1605236424416,
1605151024484,
1605149821350,
1605148877531,
1605148829774,
1605122320233,
1605075939785,
1605038594648,
1604387005825,
1603925006656,
1603898564010,
1603368893923
],
"note_signatures": [
[
"~Ada_Wan1"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3761/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Note re subsequent publication\", \"comment\": \"Part of this work, \\\"Representation and Bias in Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling\\\", was re-formulated as \\\"Fairness in Representation for Multilingual NLP: Insights from Controlled Experiments on Conditional Language Modeling\\\" and published at ICLR 2022, see https://openreview.net/forum?id=-llS6TiOew.\"}",
"{\"title\": \"Response and announcement\", \"comment\": \"We appreciate all comments received thus far and will be uploading a revision.\\n\\nLinguistic complexity (as we conventionally know it) or morphological/morphosyntactic complexity has been decomposed into statistical criteria in length and vocabulary. The argument is sound. In the context of computation, there is **no such complexity necessary** in Transformer CLMs. We will provide a more analytical solution to this in separate work. \\n\\n(In the case of deeper models, where disparity could arise, one could also understand it as the Transformer being able to solve/learn the ...V+C\\\\*V+C\\\\*V+... pattern (C for consonant and V for vowel), may the script be Latin, Cyrillic, or Abjad. The issue with the ZH logography was remedied using byte representation. Other linguistic complexities are not relevant in the models as demonstrated, unless such complexities are being modeled via word tokenization.)\\n\\nAs noted in the title, the paper is about _insights_ from conditional language modeling experiments. \\n\\nThe findings are clearly enumerated in the summary of findings in \\u00a71.1, and summarized again in the conclusion. \\n\\nThe basic take-home message to the question \\\"are languages equally hard to conditional language model\\\" is yes-and-no. The paper walks one through why. \\n\\nThat said, there are some \\\"more profound meta interpretations\\\" possible. So the bigger \\\"take-home messages\\\" can be different depending on where one's \\\"home\\\" is. For example, \\n\\n- a seasoned domain specialist (a linguist) may find comfort in seeing that neural networks could unite the two disparate schools of thought in linguistics --- general linguistics and comparative linguistics, whose underlying assumptions are, respectively, that languages are fundamentally similar and that languages are fundamentally different. \\n- Linguists and/or computational linguists may be inspired to reflect on how our current operation and common conception of language is only word-bound. The field has not looked at language beyond the word-based/grammarian interpretation. This paper offers a new perspective, an opportunity for a new science, one of finer granularity and a more stable fundament, with characters and bytes as units. There is more to the structure of language beyond morphology, syntax, etc.. \\n- Algorithm-focused practitioners / most computer scientists could see that the role data plays in Double Descent, and how a holistic evaluation and consideration with the nature of data can be beneficial. \\n- Someone who is new to NLP or someone who is interested in knowing how CLMing can be made fairer across different languages may be glad to find out data representation/preprocessing matters. \\n- To those who see neural networks (more specifically, seq2seq models) as black boxes, we show that they are neither black nor boxes. They might have been quipped black because no one has done these very basic control experiments or looked at the data statistics. They are not boxes but more like _lenses_. \\n\\nSo in this respect, it is ok for different people to not have the same take-home message from this paper. \\n\\nWe hope our writing has offered those who already understood company, and helped those who don\\u2019t understand. 
If any reader should have any questions or comments on how they'd like a particular topic to be addressed in more detail, or would like to be notified when the next revision becomes available, please email us or leave us a comment below. Thank you.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper study to what extent languages are hard to model by a conditional language-model based on information-theoretic measurements.\\n\\nOverall, the reviewers value the systematic and extensive controlled experiments present in the paper. However, the presentation of the paper makes it very hard to follow and reviewers all still complain that it is hard to understand the take-home message of the paper. \\n\\nDespite the reviewers also appreciate the authors' effort in improving the paper, submitting the revision, responding to the feedback, they still conclude that significant reorganizing and revising of the paper is needed before it can be published. \\n\\nIn particular, the paper may be able to improve by backing up the empirical study with some linguistic phenomena or by a more careful rewriting in explaining and discussing the empirical results. \\n\\nSome other strong arguments such as \\\"Our application of statistical comparisons as a fairness measure also serves as a novel rigorous method for the intrinsic evaluation of languages, resolving a decades-long debate on language complexity.\\\" may need to be carefully revised. In this particular example, it is unclear how this paper \\\"resolve\\\" the debate on language complexity by demonstrating a few experiments. Several sentences like this one should be revised.\"}",
"{\"title\": \"Response from 24Nov2020 from Reviewer4\", \"comment\": \"Thanks for your feedback.\", \"re_1\": \"we will adjust the graphs.\", \"re_2\": \"we had in a previous non-published draft the sentence \\\"Other related work and more recent development on DD can also be found in their (Chen et al. 2020 [1]) work\\\" as the last sentence of the first paragraph under Sample-wise DD in \\u00a75. We had taken that out due to space. We can fit the sentence back in, as they provide a rather succinct summary in their App. A and their version has just been revised yesterday. Would this proposal of ours suffice (if not, would you please specify what else on DD you would like to see addressed)? Only sample-wise DD is relevant for our paper and nobody, to the best of our knowledge, has examined/analyzed this from the data perspective.\", \"re_3\": \"yes, in version 1.0, App. O is the last appendix. We took App. P (extended version of related work) from v0.1 out (the one imported on 10Nov2020 [2]) out because we did not want to overwhelm reviewers and readers. The related work section in v.1.0 on contains a more succinct summary of the most relevant related work.\\n\\nThe sentence in the now-obsolete App. P on related application references that we were referring to in our previous reply was: \\n\\\"Although end-to-end, sequence-to-sequence application papers leveraging full subword representations have been plenty \\u2014 with characters (e.g Lee et al. (2017)), bytes (Gillick et al., 2016; Costa-juss\\u00e0 et al., 2017; Li et al., 2019a), and BPEs (most NMT papers from 2016 on), there has been no systematic or statistical evaluation of these representations with an explicit effort to relate these to the statistical profiles (not traditional word-based linguistic categories) of diverse languages and to complement the statistical profiles with equitable measures in representation for fairness.\\\"\\n\\nIn v1.0 and up, it has been paraphrased and put into \\u00a73 3rd paragraph (at the bottom of p. 4): \\\"A practical takeaway from this set of experiments: in order to obtain more robust training results, use bytes for ZH (as suggested in Li et al. (2019a)) and characters for AR and RU (e.g. Lee et al. (2017))...\\\". \\n\\nAlso, we were about to re-inquire with you about your expectation of downstream task results because you had asked about these. We already clarified in our initial reply that this is not an application or NMT paper (as stated explicitly in the paper itself). \\nOne of our main objectives is to get people to examine their models and foundational assumptions carried over from traditional science(s) more critically and to look at data statistics more closely, instead of just focusing on quantitative results or calling neural networks a black box. And we'd appreciate it if you would please let us know if your expectation of downstream results is still relevant or if it's been updated given our elaboration on the main objective of this paper. One of our keywords is \\\"science for NLP\\\" --- we wanted to advocate a *science* for NLP for understanding, as NLP has been primarily *engineering*-focused. \\n\\nWe'd appreciate your confirmation. Thank you again. \\n\\n[1] https://arxiv.org/pdf/2008.01036.pdf (App. A)\\n[2] https://openreview.net/references/pdf?id=55tHd7KuIU\"}",
"{\"title\": \"Response\", \"comment\": \"1. The grey background with the lighter colors is difficult to read: I'd recommend (1) more contrast (Eg. white and darker colors) (2) larger axes and thicker lines. I appreciate that you have used line styles and color to help readability\\n2. Reg. i. An more detailed description of Double Descent would be useful, even if in the Appendix\\nii.\\n3. I'm not sure which section you're referring to: in the current version the App O is the last section. Are you referring to the changes in related work?\"}",
"{\"title\": \"Meaning vs. information (cont'd)\", \"comment\": \"Dear Reviewer 1:\\n\\nWhen we first replied to your review, we did so assuming you meant \\\"meaning\\\", not \\\"information\\\", hence there is a need to adjust our answer: \\nre Wubi \\\"we do not dispute that one stroke can be meaningless, but a unit made up of several strokes can be meaningful\\\": we think that, for information, every stroke counts. So even when a Wubi segment does not seem meaningful, it can carry / carries information content. \\n\\nWe have uploaded v1.0 and would be pleased if you wouldn't mind letting us know whether you're satisfied with it. \\n\\nThank you and best regards, \\nauthors\"}",
"{\"title\": \"v1.0.1 uploaded\", \"comment\": \"Changes:\\n\\n- fixed minor typos/formulation in main paper. \\n- \\u00a76 (Related Work) re Bender (2009) \\\"language-universal\\\" --> \\\"language independent\\\"\\n- edited language in App. O (extended version \\u00a74) and App. I (language complexity). \\n(For App. I: \\\"no valid or universal\\\" --> \\\"no universally valid\\\". The latter is a more conservative formulation, also supported more concretely in Haspelmath (2011). The original formulation was meant to be understood with the premise \\\"in the context of computing\\\"). \\n\\nVersion number is indicated at the end of the document.\"}",
"{\"title\": \"v1.0 and your review\", \"comment\": \"Dear Reviewer 3:\\n\\nv1.0 has just been uploaded. \\n\\n- We believe our formulations are now \\\"cleaner\\\", and we clarified and discussed bias more explicitly in this version. \\n- Item 3 in \\u00a71.1 (previously \\u00a71.2) has also been revised. \\n- Unfortunately, we did not have enough space for the additional references you recommended as it seemed like we might have to go off course a bit to talk about the explicit modeling of linguistic concepts and the evaluation thereof (vs the evaluation of a more implicit modeling through LMs/CLMs). There is a difference between these 2 objectives (which could make for a separate paper on its own). But we did consider, only to realize that we might end up confusing readers if we provide too much information. However, if you should think that including those references would be important and relevant for our objective, or if there is anything that you think we could improve on, please do not hesitate to let us know. \\n\\nWe look forward to hearing from you. \\n\\nThank you and best regards, \\n\\nauthors\"}",
"{\"title\": \"v1.0 uploaded\", \"comment\": \"Dear Reviewers:\\n\\nWe have uploaded a new version of the paper (v1.0), in which we hope to have addressed your actionable feedback. \\nWe'd appreciate it if you would please take a look and give us your comments. Please let me know if you have any questions or if there is anything that you'd like to discuss. \\n\\nIf you are satisfied with it, please do consider arguing for our acceptance. \\n\\nThank you and best regards, \\nAuthors of \\\"R&B in Multilingual NLP\\\"\"}",
"{\"title\": \"first response to your review (2 of 2)\", \"comment\": \"(cont'd)\\n[1] https://2020.emnlp.org/blog/2020-05-17-write-good-reviews\\n\\n[2] In our case: how can we help improve our understanding of the relationship between language, data representation, size, and performance in CLMing with the Transformer? As this can be a big question, we formulated the problem statement for this paper as \\\"are all languages equally hard to CLM?\\\". \\n\\n[3] https://en.wikipedia.org/wiki/Pasteur%27s_quadrant\\n\\n[4] \\\"logographic languages\\\" = \\\"languages with logographic scripts\\\"\\n\\n[5] Is it Harder to Parse Chinese, or the Chinese Treebank? Roger Levy, Christopher D. Manning. ACL 2003. https://www.aclweb.org/anthology/P03-1056.pdf\\n\\n[6] An Empirical Examination of Challenges in Chinese Parsing. Jonathan K. Kummerfeld, Daniel Tse, James R. Curran, Dan Klein. ACL 2013. https://www.aclweb.org/anthology/P13-2018.pdf\\n\\n[7] We adopted a novel relational evaluation method. \\n\\n[8] Cf. a related view from Rich Sutton, when asked about incorporating domain knowledge as soft priors in the Q&A after his talk at the ICML 2020 4th Lifelong Learning Workshop in video at 8:59:30-9:00:38 at https://icml.cc/virtual/2020/workshop/5735#collapse7535. \\nOur view is different from his, but we share his objective in contributing to a science of machine intelligence. That said, there is a caveat here. There are still scripts out in the world from less documented languages which have not been added to Unicode. So we do see the opportunity to optimize with some fair measures (as suggested in the paper: complementing character encoding with language statistical profiles), as that could also benefit non-neural algorithms as well as basic text processing. (This is a negligible concern for all of us here who are privileged enough to partake in an ML conference, but that may not be the case for many parts of the world where, e.g. one may have to struggle to get internet or may only have access to (older) digital devices.)\\n\\n[9] Although we \\\"only\\\" studied 6 languages in the paper, our data cover a broader range in diversity than that studied by [P19]. ZH and AR/RU are at the opposite ends of the spectrum* --- both on \\\"word\\\" level, in terms of traditional morphological analyses, and on the character level, with logographic languages being the outliers of the world's languages. \\n(* We agree that some agglutinative languages could be longer than RU/AR, hence at the further end of the spectrum beyond RU, i.e. with longer length or \\\"morphological complexity\\\". But there are no multi-way parallel data (multitexts) of larger sizes that would support our study.)\"}",
"{\"title\": \"first response to your review (1 of 2)\", \"comment\": \"Thank you, Reviewer 2, for your appreciation of our efforts.\\n\\n1. Re the language: thank you for your feedback. Part of our \\\"language\\\"/formulation is unintentional and we are in the process of reformulating some long sentences (e.g. in the abstract and in 1.1). But part of it has to do with the fact that we need(ed?) to be very careful with our wording, as though we are writing legalese (e.g. item 3 in Sec 1.2). Our findings can be viewed as in conflict of interests with linguistic typology (though that is not necessarily the case), which has become a dominant trend in multilingual NLP. In such context, we would be voicing a minority view that can be interpreted as \\\"irreverent\\\" by some, causing the paper to run into either disregard or potential rejection by those who reviewed out of self-interests instead of the interest of the community as a whole. \\n(But we firmly believe in our cause to bring some fairness, diversity and inclusion into our multilingual practice. Some things need be said (e.g. Sec 1.2) and we are \\\"singing our blues\\\" --- providing representation --- for the \\\"morphologically rich\\\" as well as \\\"morphologically poor\\\".) \\n\\nWhat formulation would you suggest for our \\\"quasi-legalese\\\" for item 3 in Sec 1.2?\", \"re_the_first_paragraph_of_the_paper\": \"it is written to clarify that this is a bridge paper and to \\\"defend\\\". Some NLP practitioners have a habit to validate claims with downstream task results and with usefulness. And we are trying to move our community members away from that tradition, or at least create a space with a more scientific (as opposed to engineering) mindset. Scientific insights and contributions do not require topping the leaderboard or using SOTA techniques (or even being \\\"useful\\\"). A scientific contribution is to improve our understanding of a phenomenon in the world [1][2]. We take on this task, however, as use-inspired basic research, i.e. research in the Pasteur's quadrant [3] to bridge the gap between basic and applied research, as a quest for fundamental understanding with some consideration of use. (So if one gets the impression that this work is not really in the traditional language sciences or linguistics space, but a bit different from customary NLP work (except maybe as a blackbox paper?), yet less theoretical than a typical ML paper (perhaps like data science?), then they got it. That is exactly the space that we are targeting and DL/NNs help enable the bridging of all of these areas.)\\n\\nAs we are trying to break new ground here, what would you do re the first paragraph if you were us? \\n\\n2. Re tone being overly pedagogical: we were originally expecting a more diverse, interdisciplinary mix of readers/reviewers. We would like to improve on this tone issue if it bothers you. Would you please provide us with some (or at least one) example(s) and suggest an alternate formulation? Thank you. \\n\\n3. Re Ponti et al. (2019) (hereafter [P19]): thank you for the reference. We will definitely mention this in related work. We were/are a bit hesitant when it comes to comparing with work that do not include ZH (or any logographic languages [4]). The authors of [P19] explicitly \\\"exclude languages that are not written in the Latin script\\\" even though ZH is available from the dataset they used (the Bible corpus from Christodouloupoulos and Steedman (2015)). 
And aware as we are of the qualitative biases (against ZH) in linguistics, we are more interested in methods that are fairer and more general. (ZH is a language for which not only is the notion of word a contested subject, as mentioned in the paper, but the notion of parts of speech like nouns and verbs is also highly questionable [5,6]. Some word-based fundamental issues have been ignored/overlooked. And the only methods so far that put ZH on an equal footing with the other languages seem to be i. transliteration (but it is at the expense of a more native logographic representation), ii. byte-based processing, and iii. hyperparameter tuning.) \\nThough saying all this or comparing with [P19] would seem like going off on a tangent a bit --- in the first place, we do not intend for this paper to be an application paper, nor do we want to make it like a paper comparing absolute scores [7] of systems with and without domain knowledge [8] for a narrower range of languages [9]. We would, instead, really like to set a precedent by working in a new transdisciplinary space where we can relate concepts in an unsupervised or less supervised way, e.g. using Double Descent to do \\\"new\\\" science, making discoveries like erraticity. \\n\\n4. Again, we appreciate your support. What can we do to get you to improve your score? We are in the process of revising. If there are any expressions that you find hard to read, please do not hesitate to let us know. Thanks.\"}",
"{\"title\": \"Meaning vs. information\", \"comment\": \"If by \\\"meaning\\\" you meant \\\"information\\\", then that should indeed, if we understand your formulation correctly, be more fitting.\"}",
"{\"title\": \"A clarification question\", \"comment\": [\"When I said \\\"meaning\\\", I meant any information that is carried and conveyed through the different representations. This is probably a loose definition, but do you think this is something you are trying to address?\"]}",
"{\"title\": \"first response to your review (2 of 2)\", \"comment\": \"(cont'd)\\n6. Re statistical properties concerning sequence length and vocabulary being the results from linguistic typology information: no, these properties do not result from linguistic typology. They are from the language data themselves (see App. D). And linguistic typology itself is also a result of language data, though with word bias from our academic tradition. \\n\\nAs to whether it is possible for there to be correlations between statistical properties and symbolic linguistic typological concepts on the \\\"word\\\" level, our paper does not comment on that at all. That is beyond the scope of the present paper. \\nWe are pleased to have been able to sort out when linguistic concepts do apply and to have reached the conclusion we did in point 3 in Sec 1.2. That is sufficient to us and for this paper. \\n\\n7. Re the additional references: thank you and we will add these to our related work in either the main text or App. P, as space allows. \\n\\nWe will be uploading a revision in the coming days. \\nIf you have any further questions or feedback, or input as to whether we should move certain information from the main paper to the appendices (or vice versa), please do not hesitate to let us know. \\n\\n[1] https://github.com/oxford-cs-deepnlp-2017/lectures/blob/master/Lecture%207%20-%20Conditional%20Language%20Modeling.pdf\\n[2] A summary can also be found on p. 1 of Jamie Ryan Kiros' PhD thesis on Conditional Neural Language Models for Multimodal Learning and Natural Language Understanding (2018). \\n[3] Multimodal Neural Lnaguage Models. Kiros et al. ICML 2014. http://proceedings.mlr.press/v32/kiros14.pdf\"}",
"{\"title\": \"first response to your review (1 of 2)\", \"comment\": \"Thank you, Reviewer 3, for your review. We would love to have you swing towards (strongly) accepting and supporting our paper.\", \"to_your_concerns\": \"1. Re the term \\\"CLM\\\" being misleading: we adopt a definition of CLM that is rather standard in the DL tradition. For example, according to [1,2], CLM is to be differentiated from an unconditional LM in that unconditional LMing is the modeling of the probability of the next token, given the history of the preceding tokens, while CLMing is the modeling of the probability of the next token, given the history of the preceding tokens *and* conditioning context. In our case, such conditioning context is a line from the source language. But in, e.g. the case of [3], this conditioning context can be in other modalities. We used a standard neural sequence-to-sequence modeling setup, as you said, like for NMT. But different from NMT, we are not concerned with translation output as we would like to eliminate the confound in search for generation. We focus on an intrinsic evaluation in perplexity/cross-entropy of our CLMs. \\n\\n2. Re \\\"I'm also not sure if comparing perplexity by conditioning on another different language (source language) is correct\\\": we are not sure why it would be correct or incorrect. We would like to do an intrinsic evaluation of CLMing, like in an NMT setup --- that *is* the definition of our task, of our investigation. Re \\\"[t]he experiments would be clearer if done with standard LM with Transformers encoder model like BERT for example\\\": we disagree. Our setup enables a better understanding of the encoder-decoder model as we are able to perform more systematic controls on data size, representation, and language on both source and target sides. \\n\\n3. Re \\\"generalizations\\\": by that we mean the findings in Sec 1.2., e.g. representation relativity, source language neutralization. (To that we will also add, to the next revision, one obvious observation that we had mentioned in the main text but did not list in Sec 1.2: \\\"[r]epresentational units of finer granularity can help close the gap in performance disparity\\\".) \\n\\n4. Re script bias being too strong of a claim: you are right, script bias is in the data. We should have made this more obvious.\", \"we_stated_already\": \"i. in Sec 1.2 \\\"[b]igger/overparameterized models can exacerbate the effect of data statistics. Biases that can be expressed quantitatively and lead to disparity are mitigable through hyperparameter tuning\\\"; \\nii. then we alluded to this again in Sec 7 (conclusion) \\\"[i]t will take everyone\\u2019s effort to mitigate the bias in ourselves...\\\"; and \\niii. we also provided an analysis of the data statistics in App. O. \\n\\nBut we see that we should clarify \\\"bias\\\" more explicitly instead in the main text. (Some of our previewers did not feel comfortable with the concept of \\\"human bias\\\", but, we have already come this far... we will revise accordingly.) \\n\\nThe bias that we are most interested in addressing in this paper (as apparent in the one-sentence summary for our paper) is word bias, which is a human bias. It is up to us to see and process languages with or without the concept of a \\\"word\\\", a concept that had given rise to a hierarchy that is not necessary. That is the primary intended reading of the \\\"bias\\\" in our title \\\"Representation and Bias\\\". 
\\n\\n(A second, more general, possible interpretation of \\\"bias\\\" is that we have a choice in adopting whichever perspective we wish to see languages in --- whether they are fundamentally different (main experiment results) or similar (when we \\\"zoom out\\\" and see no differences, as in results from App. M). It can all be just a matter of perspectives.)\\n\\n5. Despite things still looking a bit non-monotonic in Fig. 3b, disparity was eliminated. In larger/overparameterized settings for our main experiment, we are already stretching it with the length --- when we do not further tune our hyperparameters. As can be seen in App. L2, even 300 bytes for this setting would cause erraticity. But as mentioned in the last paragraph in Sec 5, and as can also be seen in App. M, erraticity can be resolved and a monotonic development is possible when we adjust our hyperparameter setting, e.g. by decreasing the depth of the models to one layer.\"}",
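A brief notational sketch of the LM/CLM distinction referenced in point 1 above (symbols chosen here for illustration only; they are not taken from the paper): with target-side tokens $y_1, \dots, y_T$ and conditioning context $x$ (a source-language line in this setup, or another modality as in [3]),

unconditional LM: $p_\theta(y_t \mid y_{<t})$
CLM: $p_\theta(y_t \mid y_{<t}, x)$

The two objectives differ only in whether the context $x$ enters the conditioning set.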
"{\"title\": \"first response to your review\", \"comment\": \"Thank you, Reviewer 1, for your review.\\n\\nWe did not mention \\\"meaning\\\" or \\\"meaning representations\\\" in our paper, nor was it our intent to model meaning explicitly. We investigate the relation between language, data representation, size, and performance in the context of Transformer CLMs through the research question of whether languages are equally hard to CLM. One could think of our effort as estimating conditional probabilities with the Transformer. \\n\\nIn fact, when we filtered our data for the 6 languages, we did so in lockstep/parallel (hence it is a fair comparison) [1]. Semantic content has been held constant. Meaning is an independent variable of our experiments. Re \\\"300 characters in Chinese carry much more information than 300 characters in English\\\": we understand, which is why we see the disparity on the character level between ZH and the other languages. A ZH character carries more information than an EN character does, hence sequence length in ZH is generally shorter than that in EN. Important for our comparisons is that we are evaluating parallel sets of lines.\", \"to_your_concerns_re_the_additional_experiments_with_zh\": \"Re Wubi --- \\\"[t]he segmentation may not be correlated with the meaning of the word at all, as claimed by the papers cited by the authors\\\": we do not dispute that one stroke can be meaningless, but a unit made up of several strokes can be meaningful. One might argue that there is a limit to how much semantic information one can obtain through alignment [2] of sub-character (or subword) units, because not all sub-character/subword units (in any language) are meaningful units. We agree, there is a limit to any kind of morphological analyses or to a mapping between form and meaning. Yet it can provide an additional source of information and can be a good pedagogical exercise for students.\", \"re_pinyin\": \"the implementation of Pinyin we used is with lexical tones (see App. B). Re the ambiguity in Pinyin: yes, we are aware of the limits of symbolic approaches for ZH being a real-world problem. Even if lexical tones were given, the lack of account of tone sandhi (tone changes in sequences) can pretty much only be effectively overcome by continuous representations at scale (by processing raw audio data when modeling sounds). This is also one reason why we try to advocate, in general, a more statistical approach to handling multilingual data, as opposed to relying on traditional linguistic typological resources.\\n\\nWe recognize that, despite how ZH may be viewed as a high-resource language, it lacks a more native/flexible account in traditional language science. And we are working towards a more diverse and inclusive science, with this work being part of such initiative. We appreciate your support. \\n\\nIf there is any part of our paper that still seems unclear to you or that you could help us improve, we'd appreciate it if you could be more specific in pointing out the relevant sections/sentences. Thank you. \\n\\n[1] When we filtered our data for length, we ensured that it remained fully parallel across all 6 languages. Because of this, every ZH line in characters has a translation in each of the other 5 languages whose line length does not exceed 300 characters. \\n\\n[2] Alignment can be useful for good translation and essential for the automatic compilation of lexicographic resources --- with neural and non-neural algorithms. 
For finer-grained alignment of a sub-character component in a logographic language and a sub-\\\"word\\\" component (i.e. character or sub-character strings, depending on the script) in a non-logographic language, we cannot do so without breaking away from the traditional/conventional notion of a \\\"word\\\" that tends to be centered on EN or a notion that relies on whitespace tokenization only. Examples: 1. in aligning the \\u9d5d in \\u4f01\\u9d5d meaning 'penguin' in ZH (literal: 'stand'+'goose') with the gans 'goose' in vetgans (the Dutch word for 'penguin', lit. 'fat'+'goose'); 2. in aligning the \\u0435\\u0432\\u0430 part in \\u043a\\u043e\\u0440\\u043e\\u043b\\u0435\\u0432\\u0430 (RU for 'queen') with the \\u5973 (designating 'female') in \\u5973\\u738b (\\u738b means 'king'); 3. in extracting the \\u9ce5 'bird' part from \\u9d5d 'goose'.\"}",
"{\"title\": \"some quick answers for now, to be followed up with more comprehensive ones (depending on your responses) and/or revisions\", \"comment\": \"Thank you for your review.\\n\\n1. Re diagrams: do you find the diagrams in App. F still too small or difficult to read? As stated in the 1st paragraph of Section 3 (v0.1): \\\"[w]hat should be considered relevant results for our investigation is the number of language pairs with significant differences reported in Table 1, the general patterns of (non-)monotonicity and disparity in the figures, and the corresponding analyses\\\", one does not need not be concerned with the absolute scores of the experiments directly. The results are summarized in the disparity tables (Table 1 (and Table 5 in App. M)). That said, we can further enlarge the diagrams upon your feedback. \\n\\n2. Re \\\"a clearer focus rather than the broad range of topics covered here\\\": this paper can be viewed as a \\\"bridge paper\\\", connecting ideas in disparate fields [1]. Namely, we would like to not only \\\"bridge an understanding between language science and engineering\\\" (Sec. 2), but also connect concepts in language or language data with those in DL/NNs (Sec 1, and also in our keywords \\\"science for NLP\\\", \\\"fundamental science in the era of AI/DL\\\"). This latter keyword was a workshop title at ICLR 2020, though language science was not one of sciences being discussed. Just as phonologists have described and drawn parallels between the interaction of symbolic representations of sounds and cellular processes in biology, we think that there are plenty of opportunities to further language science and a statistical science for NLP with DL/NNs. \\n\\n2i. Re \\\"[i]t is difficult to understand what the methods/terms (the information theoretic measure used, double descent) are - little time is spent explaining these\\\": the information-theoretic measure used is cross entropy (Sec 2.1 and App. B). We will add a brief description of DD in the main text. Sample-wise double descent (coined by [2]) is, in short, when performance gets worse with increasing data size and then gets better. Double Descent (DD) has been a rather popular topic this past year at ML conferences, including many submissions for ICLR 2021 as well. Most work, however, concentrate on the theoretical aspects, but there is another parallel submission advocating that the emergence of DD \\\"is due to the interaction between the properties of the data and the inductive biases of learning algorithms\\\" [3]. This seems to be a timely corroboration of our findings. \\n\\n2ii. DD is relevant because: \\na. our (more general) goal is to improve our understanding and expectation of the relationship between language, data representation, size, and performance (abstract); and \\nb. we also made the connection between words and ZH_trg in characters, both as \\\"non-atomic\\\" units that can be further decomposed (Sec. 5, DD) and as representations that are prone to exhibit DD. This can affirm the interpretation of wordhood for those ZH speakers who identify ZH characters as words (Sec. 3 last paragraph and App. I). \\n\\n(Similarly, erraticity is relevant due to [a] above and also because it affects performance disparity --- if a language has erratic performance and another doesn't, the difference is likely to show as statistically significant.)\\n\\n2iii. 
Re \\\"[s]everal portions of text are repeated - with some editing, space can be made to discuss concepts important to the paper\\\": the initial version was written rather \\\"defensively\\\". There are many novelties (concepts, styles, approaches) being introduced in this paper, hence we repeated some things that seemed newer or more important. What are some of the repeated items that bothered you? \\n\\n3. Re related work: do you find the related application references cited in our App. P sufficient? If not, what do you think is missing? (We are not sure if your comment was reflective of your including/excluding the information from App. P.)\", \"re_downstream_task_results\": \"as stated in the beginning of the paper, this is not an application paper. There are quite a few substantial scientific contributions made in this paper already. We would like to first focus on building bridges with this paper, sorting out representations and what holds when.\\n\\n4. Last but not least, we would like to understand the reasons behind your score assignment. If there is anything else that we could do to help us win your support of our work, please do point that out to us. Thank you. \\n\\n[1] from https://nips.cc/Conferences/2020/PaperInformation/ReviewerGuidelines (under Review content 1)\\n[2] Nakkiran et al., ICLR 2020: https://openreview.net/forum?id=B1g5sA4twr\\n[3] https://openreview.net/pdf?id=nQxCYIFk7Rz\"}",
"{\"title\": \"v0.1 uploaded: combined main paper and appendices (from the original submission, not yet revised)\", \"comment\": \"Dear Reviewers:\\n\\nThank you for your reviews. \\n\\nWe have uploaded the originally submitted supplementary material (the Appendix section) together with the original main paper as a combined PDF (for future reference, we will refer, if necessary, to this combined version as v0.1) for reading convenience. Please note that this version is not yet revised. \\nSome of the concerns raised in the original reviews were already addressed in the appendices. We kindly invite and encourage all our readers (reviewers and general readers alike) to read and review the paper in its entirety. \\n\\nWe will reply to each of your concerns as expressed in this first set of reviews in the coming day(s) and upload revisions accordingly. We are confident about the findings of this paper providing valuable and significant insights to our current practice in language science and engineering, and also DL/NN evaluation and model interpretation. We are committed to improving our formulation to a version that would satisfy our reviewers and readers as much as possible, while remaining true to scientific integrity. We thank you in advance for your evaluation and input and look forward to a fruitful discussion period. \\n\\nThank you again and best regards, \\nAuthors of \\\"R&B in Multilingual NLP\\\"\"}",
"{\"title\": \"Good premise; Unclear Paper Focus\", \"review\": [\"Summary: The authors attempt to investigate to what extent languages are hard to conditionally language-model. They do this by using some information theoretic measures. Claims:\", \"There are no statistically significant differences between source language representations, but there are significant difference between pairs of target language representations.\", \"There is no complexity that intrinsic to a language except its statistical properties concerning sequence length and vocabulary (unless word-based methods are used).\", \"They also observe phenomena such as Double Descent and erraticity.\", \"----\"], \"strengths\": [\"The Experiments are extensive.\", \"The relative similarity of source language representations is interesting and worth exploring further.\"], \"weaknesses\": [\"The diagrams are difficult to read\", \"The paper is hard to follow and would benefit from a clearer focus rather than the broad range of topics covered here. For example:\", \"It is difficult to understand what the methods/terms (the information theoretic measure used, double descent) are - little time is spent explaining these.\", \"Double descent is discussed in the paper but it is still made not clear why this is relevant in the paper.\", \"Several portions of text are repeated - with some editing, space can be made to discuss concepts important to the paper\", \"The authors make recommendations for modeling (Eg. using char level or byte level models for certain models - which have been extensively studied for this): this is not followed up with any concrete results on translation/downstream tasks or pointing out relevant work.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting but raises questions\", \"review\": \"This paper is trying to answer an important question: How does representation play a role in carrying meanings? In doing so, the authors experimented with 6 languages in 3 + 5 kinds of representations. The authors concluded that the different performances among language pairs maybe be the result of word segmentation in different ways.\\n\\nThis is an interesting step towards understanding meaning representations, especially in languages that do not have an alphabet. However, as much as I agree with some of the final conclusions, the soundness of the experiments appears to be in question.\\n\\nMy main concerns are about the additional experiments with Chinese.\\n\\nThe authors claimed that \\\"On the character level, target language ZH (ZHtrg) shows a different learning pattern throughout.\\\" There are two types of character-level representations used: Wubi and Pinyin. Wubi was originally invented for professional typesetters so that they can type fast. The segmentation may not be correlated with the meaning of the word at all, as claimed by the papers cited by the authors. Pinyin, on the other hand, is highly ambiguous. One pinyin may representation dozens of words and the authors did not take tones into considerations at all. \\n\\nThe author also mentioned that \\\"After filtering length to 300 characters maximum per line in parallel for the 6 languages, we made 3 subsets of the data with 1 million lines each\\\". Each language carries meaning differently and the information density is drastically different. 300 characters in Chinese carry much more information than 300 characters in English. This is an unfair comparison.\\n\\nIt would make a lot more sense if the authors treat each language differently because of their orthographic differences.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting paper and experiments, but lack of evidence to support its claims.\", \"review\": \"The paper investigates whether languages are equally hard to Conditional-Language-Model (CLM). To do this, the authors perform controlled experiments by modeling text from parallel data from 6 typologically diverse languages. They pair the languages and perform experiments in 30 directions with Transformers, and compare 3 different unit representations: characters, bytes, and word-level (BPE).\\n\\nI appreciate the authors' effort for their systematic controlled experiments. However, I'm leaning towards rejecting this paper since I think some of the claims made in the paper are too strong and not really backed up by their experiments.\", \"some_comments\": [\"The term \\\"Conditional-Language-Model\\\" can be misleading, since this paper model a target language conditioned on a source language, so more like in a machine translation setting rather than standard language modeling setting where you can also condition on the previous history.\", \"I'm also not sure if comparing perplexity by conditioning on another different language (source language) is correct. The experiments would be clearer if done with standard LM with Transformers encoder model like BERT for example.\", \"At the end of Section 2, the authors mention about \\\"generalizations\\\", but I couldn't really find any discussion about this in the paper. Maybe this can be clarified?\", \"I found that claiming script bias in character models is too strong, if the experiment only shows bias in ZH (and this is expected since its character-level has different notion with languages with Latin script).\", \"Byte-level: I found that there is still some \\\"erraticity\\\" in Figure 3(b) especially when the data size increases (which is more practical in real world application), so this is not entirely resolved?\", \"I also think the summary in Section 1.2 stating that linguistic typological information is not necessary given \\\"statistical properties concerning sequence length and vocabulary\\\" is not necessarily valid since these two properties are the results from linguistic typology information.\"], \"missing_references\": \"1. From characters to words to in between: Do we capture morphology? Clara Vania and Adam Lopez. ACL 2017.\\n2. Multilingual Part-of-Speech Tagging with Bidirectional Long Short-Term Memory Models and Auxiliary Loss. Barbara Plank, Anders S\\u00f8gaard and Yoav Goldberg. ACL 2016.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting and thorough exaimination of an important problem. Writing is sometimes too complicated and some more high level analysis could help.\", \"review\": \"The paper provides an empirical investigation of an important problem: the transferability of language modeling signals across languages in the transformer model. This is an important question because it can teach us both on the relations between languages and the properties of the transformer model (although it is not easy to tease the two effects apart).\\n\\nThis is a thorough paper with a large number of experiments and with interesting conclusions that nicely generalize the low level patterns observed in the experiments. These conclusions are likely to be useful for the research community as part of its on going investigation of language transfer and the transformer model.\", \"i_have_several_comments_though\": \"1. The language of the paper is often very complicated. Just as a couple of examples: It was very hard for me to follow the abstract, the first paragraph, the (very long) sentence that start with \\\"in order to eliminate\\\" (1.1), item 3 in the list of contributions and this is just a partial list. I ask that if the paper is accepted the authors will try to improve this aspect.\\n\\n2. The writing is often over pedagogical and I often got the feeling that the authors try to educate their readers (but not in the positive sense of the word). I would try to avoid this style. \\n\\n3. This work seems highly relevant to the following paper:\\n\\n\\\"Towards Zero-shot Language Modeling.\\\" Edoardo Maria Ponti, Ivan Vulic,Ryan Cotterell, Roi Reichart and Anna Korhonen . EMNLP 2019\\n\\nI think the discussion parts can gain form comparing the conclusions of the two papers, when relevant.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
ECuvULjFQia | A teacher-student framework to distill future trajectories | [
"Alexander Neitz",
"Giambattista Parascandolo",
"Bernhard Schölkopf"
] | By learning to predict trajectories of dynamical systems, model-based methods can make extensive use of all observations from past experience. However, due to partial observability, stochasticity, compounding errors, and irrelevant dynamics, training to predict observations explicitly often results in poor models. Model-free techniques try to side-step the problem by learning to predict values directly. While breaking the explicit dependency on future observations can result in strong performance, this usually comes at the cost of low sample efficiency, as the abundant information about the dynamics contained in future observations goes unused. Here we take a step back from both approaches: Instead of hand-designing how trajectories should be incorporated, a teacher network learns to interpret the trajectories and to provide target activations which guide a student model that can only observe the present. The teacher is trained with meta-gradients to maximize the student's performance on a validation set. We show that our approach performs well on tasks that are difficult for model-free and model-based methods, and we study the role of every component through ablation studies. | [
"meta-learning",
"privileged information"
] | Accept (Poster) | https://openreview.net/pdf?id=ECuvULjFQia | https://openreview.net/forum?id=ECuvULjFQia | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"DsaanhX3yRU",
"qv-SJt0j3P",
"Awnwxsr-SCy",
"ZxkHtVgPfS",
"NO6x8ylW4K",
"dqmJzXCYb9q",
"x8qNvqdUH2",
"qEA-NEIq64b",
"vI378HWJ0fQ",
"8RThwa-6YXp",
"-nyOOduxT-h",
"Gr9x8mxy_r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040434838,
1606145472576,
1606125164311,
1605710191270,
1605710064394,
1605709903397,
1605709705735,
1605709620348,
1603893033172,
1603877188348,
1603869951287,
1603818099619
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3758/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3758/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3758/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3758/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposes a new teacher-student framework where the teacher network guides the student network in learning useful information from trajectories of a dynamical system. The proposed framework is inspired by the Knowledge Distillation method. The teacher learns what information should be used from the trajectories and distills this information for the student in the form of target activations. In a nutshell, the framework allows the student to interpolate between model-based and model-free approaches in an automated fashion. Experimental evaluation on both the hand-crafted and simulated tasks demonstrate the effectiveness of the proposed framework. The reviewers had borderline scores in their initial reviews and raised several questions for the authors. The reviewers appreciated the rebuttal, which helped in answering their key questions -- I want to thank the authors for engaging with the reviewers during the discussion phase. The reviewers have an overall positive assessment of the paper, and believe that the proposed teacher-student framework is novel and potentially useful for many real-world problems. The reviewers have provided detailed feedback in their reviews, and I would like to strongly encourage the authors to incorporate this feedback when preparing the final version of the paper.\"}",
"{\"title\": \"Thanks!\", \"comment\": [\"Dear reviewer, thank you for getting back to us. We integrated your recommendations as follows: We\", \"updated the title to \\\"A teacher-student framework to distill future trajectories\\\";\", \"removed all mentions of the word \\\"interpret\\\" and now only use \\\"extract information from trajectories\\\" and \\\"distill into target activations\\\";\", \"clarified the use of the term \\\"distill\\\" in the first page;\", \"updated every plot and mention of the algorithm, that we now call \\\"LDT\\\" for Learning to Distill Trajectories.\", \"Thanks again for helping us improve the clarity of the paper!\"]}",
"{\"title\": \"Response to authors\", \"comment\": \"Dear authors, thank you for the response; it clarified most of my questions.\\n\\nRegarding the clarity of the framing, thank you for being open to the feedback. To be very specific, I recommend that the authors:\\n\\n- make sure that all the words that they use in the title is appropriately put into context in the paper (preferably in the early part), and\\n- perhaps update the title itself to make it more informative about the work, as they are already considering.\\n\\nI do like the newly suggested titles better, because they are more specific about what the method does. For the same reason I think the version with \\\"teacher-student framework\\\" is most clear; but this may be my personal preference, so please do not feel obliged by this comment.\\n\\nWhat I would recommend more strongly, more than the specific wording of the title, is to make sure that no aspect of the title is left unclarified by the time a reader reads through the introduction. For example, the newly suggested titles all have the word \\\"distill\\\", but this word is currently not used in the paper text (not until the conclusion section), so it is context-less and vague. The way the authors illustrated the typical use of the term in their response (\\\"distilling a large model into a smaller one\\\") was great and very clear. The authors could either use the word \\\"distill\\\" in the paper like this so that its meaning in this context becomes clear, or speak in terms of other words that they actually use in the paper (e.g., there are several instances of \\\"extract\\\").\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"Thank you for your positive review! We are happy to address the concerns you raised:\\n\\n**Dimensionality of h\\\\* and h:** \\nThank you for pointing this out \\u2014 upon re-reading the submitted manuscript, we noticed that this part needs a clarification: In Section 3 we intentionally keep the specific choice of the teaching-loss-function open as an implementation detail. In principle, one could use any function that compares some transformation of the student\\u2019s activations with targets provided by the teacher. The supervision could affect some subset of the layers, etc. Exploring the space of teaching-loss functions is an interesting direction for future work.\\n\\nWe decided to focus on what seems as the most straightforward setup, which is currently described in Section 4: We start by letting the teacher output a target-activation vector for every layer of pre-activations in the student network, and to define the teaching loss as the average element-wise squared error between pre-activations and targets. However, this would mean that the teacher is forced to provide a target activation for every neuron. We found that it helps to relax this requirement by letting the teacher output both a target activation as well as a \\u201cmasking weight\\u201d $m_k$ which scales this loss component. This allows the teacher to leave certain neurons unsupervised (in an example-dependent way).\\n\\n\\n**Is there any particular reason for using classification loss in all examples especially for a typical regression problem? Did regression loss lead to bad performance?:** We decided to use a classification loss based on the results from MuZero (Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model, Schrittwieser et al.), that observed more stable results using cross-entropy instead of mean squared error for rewards and values. Therefore, we did not try using the regression loss.\\n\\n\\n**(1) As mentioned in the intro, model-based methods fail due to partial observability and etc, but I do not see the examples in this study have such issues. + (2) In the toy XOR example, the particular design of $x^\\\\*$ seems to play the key role. It is not the typical trajectory of $x$. This hand-crafted data does not help with demonstrating the teacher learning the powerful interpretation itself.**\\n We agree that the MuJoCo and Game-of-Life tasks are fully observed and therefore less of a problem for model-based methods than partially observed tasks. On the other hand, the XOR example you mentioned was precisely chosen to demonstrate a simple case where a deterministic model fails to predict the privileged data. \\n\\nIn Appendix B2.3 we detailed that making a fair comparison to the model-free, Aux, and LIT students is difficult, since the model-based student effectively uses n=16 times more computation. Note that the main objective of this paper is to compare to what extent the abstract models implicitly learned by the same student architectures but with different techniques, can learn to incorporate the trajectory information. On this basis, we did not include the model-based baselines in our comparisons.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for the positive review, we are glad you found the work to be well-motivated and interesting.\\n\\n**I suggest to add relevant citations in Sec.1.** \\n**(1) \\\"deterioration on the tasks that are potentially relevant\\\"?**\\nWe reworded the statement about the deterioration on relevant tasks as it was somewhat ambiguous.\\n**(2) Which task do model-free methods achieve substantially better performance?** \\nOne of the main benefits of model-based methods is sample-efficiency. Given sufficient data (often several orders of magnitude more), current model-based methods are sometimes outperformed by model-free methods. Examples where model-free methods achieve state-of-the-art asymptotic performance include challenging tasks such as the game StarCraft (AlphaStar) and robotic manipulation tasks such as the described in the Rubik\\u2019s cube paper by OpenAI (we included these references in the introduction). \\n\\n**Why don't you use existing methods shown in Sec.2 and compare? Where is the MB baseline?** We should clarify this aspect in the manuscript, as AnonReviewer1 shared your first question. The baselines we chose are fairly simple and established methods. The reasons for our choice were:\\ni) The approach we propose is fairly generic and orthogonal to most techniques used to regularize model-free training.\\nii) The main other method that tries to address a similar problem to ours is Value driven Hindsight Modeling (Guez et al., NeurIPS 2020): while a comparison to this would be interesting, the work is very recent, there is no publicly available source code released at the moment, and it would be hard to obtain fair and sound results when comparing to it.\\nWe focused on analysing the new method we presented empirically, we decided instead to design several ablation studies, as they provide more insight into how the method performs.\\n\\nRegarding the model-based baseline, in Appendix B2.3 we detail that making a fair comparison to the model-free, Aux, and LIT students is difficult, since the model-based student effectively uses n = 16 times more computation. Note that the main objective of this paper is to compare to what extent the abstract models implicitly learned by the same student architectures but with different techniques, can learn to incorporate the trajectory information. On this basis, we did not include the model-based baselines in our comparisons.\\nThe typo is now fixed, thanks!\\nDo you still have any concern or feedback that we could incorporate?\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for the extensive feedback. The main concerns you express are w.r.t. to clarity, and we agree that they need to be addressed.\\n\\n**Framing**: As you point out, the method itself is more general than exclusively requiring trajectories as privileged information. Upon deciding how to present the work, we were unsure whether it would be fair to claim more generality for it than applications to trajectories, given that this has been the main topic we have investigated in our experiments so far.\\nAs for the word \\\"interpret\\\", we meant it as \\\"the teacher learns to find what's relevant in the privileged information, and distills/transforms/translates it into target activations for the student to fit\\\". \\nWe understand your concern and we agree that a more suitable term should be found.\\nOverall, we are happy to adjust the title and framing of the paper. How would you recommend presenting it?\\nA few alternative titles that we have been considering (but we are happy to hear your proposals too):\\n- \\\"Learning to distill trajectories into implicit models\\\"\\n- \\\"Teaching by distilling trajectories to learn implicit models\\\"\\n- \\\"A teacher-student framework to distill trajectories into implicit models\\\"\\n\\nWe are aware that \\\"distillation\\\" is often used in the context of \\\"distilling a large model into a smaller one\\\", but combining it with \\u201ctrajectories\\u201d might prevent the ambiguity.\\n\\n**The proposed framework avoids the problems of model-based and model-free methods, but what advantages of the two does it keep?:**\", \"from_model_free_it_preserves_the_following_advantages\": [\"Its asymptotic performance is not capped by modelling errors of the environment, since there are no observations to precisely reconstruct, nor accumulating errors in longer rollouts.\", \"It can learn to model the environment \\\"internally\\\" at any spatial or temporal resolution (e.g., it could internally learn to plan asynchronously)\"], \"from_model_based\": \"- It can use the rich amount of information in future observations to bootstrap learning the task (instead of having to rely only on rewards/values). An example of this is the game-of-life dataset (Section 4.2), where model-free methods are too unconstrained and can find spurious explanations that are unrelated to the underlying mechanisms. \\nIn other words, LIT preserves the advantage of using more learning signal than model-free methods, which rely on a single scalar per example.\\n\\nOne advantage that is not preserved from model-based is the straightforward possibility to change tasks by adapting the reward function only.\\n\\nThanks for your comment, we made this more explicit in the conclusion section of the paper.\\n\\n**Even a brief, ~1 sentence description of learning from privileged information would be helpful.** We added this to the end of the introduction, thanks!\\n\\n**One of the central questions that is raised in the introduction is this: \\\"What is the right way to incorporate information from different trajectories?\\\". Why \\\"different\\\"?:** \\nThank you for pointing this out. By \\\"different\\\" trajectories, we meant incorporating information from \\\"several\\\" trajectories, not in the sense of simultaneous trajectories but in the sense of several examples each with one (future) trajectory. We had not seen the potential for a misunderstanding there. 
It is now fixed, thanks!\\n\\nWe are glad you found the work to be significant and its quality high. We look forward to hearing your opinion about the title/framing, and we thank you again for helping us improve the clarity of the paper.\"}",
"{\"title\": \"Comment to all reviewers\", \"comment\": \"We thank all reviewers for their constructive feedback, which greatly helped improve the clarity of the paper.\\n\\nWe want to emphasize that beyond the conceptual framework that we introduced, the method we proposed could be widely applicable to a variety of problems in future work, including medical diagnosis and decision-making.\\n\\nIn an effort to improve reproducibility and to allow potential future work to build on ours, we commit to releasing the code with the final version of the paper.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"**Baselines:** We agree with you, the baselines we chose are fairly simple and established methods. The reasons for our choice were:\\ni) The approach we propose is fairly generic and orthogonal to most techniques used to regularize model-free training.\\nii) The main other method that tries to address a similar problem to ours is Value driven Hindsight Modeling (Guez et al., NeurIPS 2020): while a comparison to this would be interesting, the work is very recent, there is no publicly available source code released at the moment, and it would be hard to obtain fair and sound results when comparing to it.\\nWe focused on analysing the new method we presented empirically, so we decided instead to design several ablation studies, as they provide more insight into how the method performs.\\n\\n\\n**It would be good to add a running example to explain the various concepts and definitions used in the paper:** Thanks for pointing this out. Based on your comment, we added an example of medical decision-making to the manuscript (second and third paragraphs of Section 3.1) and use it to explain the definitions introduced in Section 3.1:\\nThe task is to predict whether a patent will recover under a certain treatment. The observations of the system could be measurements (biopsies, X-ray images, etc.) taken on the patient over time. In this task, we could choose to predict the outcome -- a binary variable -- based on the current and past measurements alone, but in doing so, we would throw away rich explanations that could give more clues about why the patient did or did not recover (pure model-free). On the other hand, modelling the future measurements conditionally on the past and present (pure model-based) might be a challenging task: for example, a medical report written in human language would be extremely hard to predict. The teacher network we propose in LIT can learn to extract the task-relevant information from all future measurements and convert them to a learning signal that guides the student toward better-generalizing solutions.\\n\\nThe 2 typos you pointed out are now fixed, good catch!\\n\\nThank you for your feedback, it helped us improve the presentation of the paper! We are glad you found our learning framework to be novel, that it studies an important problem with many applications and with ample room for follow-up studies that leverage the strengths of both model-free and model-based methods using knowledge distillation techniques.\\n\\nAre there any concerns left that we could address?\"}",
"{\"title\": \"A novel approach to predict labels of dynamical systems\", \"review\": \"This paper proposes a learning framework for predicting the labels of dynamic systems. Unlike existing model-based approaches and model-free approaches, the proposed model takes a middle ground and uses a knowledge distillation-based framework. It uses a teacher model to learn to interpret a trajectory of the dynamic system, and distills target activations for a student model to learn to predict the system label based only on the current observation.\\n\\nExperimental results on both synthetic and simulated datasets confirm the effectiveness of the proposed framework.\", \"pros\": \"1. The paper studies an important problem. Predicting the behavior of a dynamic system has many applications. \\n\\n2. The proposed model is interesting and may lead to a series of follow-up studies that leverage the strengths of both model-free and model-based methods using knowledge distillation techniques.\", \"cons\": \"1. The baseline models are quite simple. There are stronger baselines as noted by the authors. While the proposal is a learning framework, it might still be worth customizing and comparing it with state-of-the-art models in specific problems. \\n\\n2. The presentation of the paper can be improved. It would be good to add a running example to explain the various concepts and definitions used in the paper.\", \"additional_comments\": \"\", \"typo\": \"\\\"a teacher networks\\\" => \\\"a teacher network\\\"; \\\"using only using\\\" => \\\"using only\\\"\\n\\n**Update after author response:** I appreciate the authors' efforts to address my comments. The new version reads better. However, I am still not entirely convinced by the choice of the simple baselines. Since a positive rating is already given, I would keep it unchanged.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting work, but the framing is confusing\", \"review\": \"This paper presents a student-teacher framework, where the teacher network can be used to select and prioritize the relevant properties of the given dynamical system that should be learned by the student.\", \"pros\": [\"(significance) I think the presented framework is a powerful one, and has a potential to be applied broadly to many real-world problems.\", \"(quality) The development of the method is sound and well-motivated. The method was tested on a toy example and then applied to tasks with varying degrees of challenges.\"], \"cons\": [\"(mostly on clarity)\", \"What confuses me the most is the way the authors frames their work. From the current title \\\"Learning to interpret trajectories\\\", and the abstract, it was not really clear to me what the paper is about; I am not sure if the population of people who stop at the title would be the same as the population that finds the contents most interesting. Specifically:\", \"Why is \\\"trajectory\\\" a central keyword for this work? The word trajectory can be used in many different contexts, I don't think it was made clear anywhere in the paper what is the defining features of the \\\"trajectory-ness\\\" that the authors want to emphasize.\", \"What do you mean by \\\"interpret\\\"? This word is being used in a very loose fashion without a clear context; it is empty at best, and misleading at worst.\", \"I get that the proposed framework avoids the *problems* of model-based and model-free methods, but I am having difficulties identifying what *advantages* of the two methods that the framework is incorporating.\", \"One of the central questions that is raised in the introduction is this: \\\"What is the right way to incorporate information from different trajectories?\\\". But I am not sure how this work solves the problem of incorporating *different* trajectories specifically.\"], \"additional_comment\": \"- The core concepts from the previous works that the work is based on, such as *learning using privileged information* or the meta-gradient approach, are not clearly introduced. Even a brief, ~1 sentence description would be helpful.\\n- The ideas of model-based and model-free methods are reinforcement learning concepts, and may not be clear to people who are not in RL. Again some brief description would help.\\n\\nOverall, I have a mixed feeling about this paper and I currently stand between scores 5 and 6. Whereas I find the proposed method interesting, I feel that the lack of clarity, and the confounding of messages in the framing, make this paper rather short of the standard for the conference. \\n\\n**UPDATE:** \\nMy major concerns were addressed in the revised version of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"well motivated\", \"review\": \"Summary:\\nThis work tries to find a compromise of model-based and model-free methods, using a teacher and student network . The teacher network is trained with meta-gradients. It interprets the trajectories and provides activations for a student network that is supervised for a given task using the current state.\", \"strengths\": [\"The paper is well motivated and solves interesting problem.\", \"The related work is thoroughly reviewed.\"], \"weaknesses\": [\"Some claims made by authors are not validated. I suggest to add relevant citations in Sec.1. These claims support the motivation of this work but are not acceptable without proper references. For example, where is the evidence of deterioration on the tasks that are potentially relevant? Which task do model-free methods achieve substantially better performance?\", \"The evaluation is conducted using the self-generated baselines. Why don't you use existing methods shown in Sec.2 and compare? I cannot find the result of the model-based baseline (as a counterpart of MF).\", \"Typos: using only using model-free methods\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Recommendation to Accept\", \"review\": [\"This paper proposes a teacher-student training scheme to incorporate the useful information of trajectory to improve the predictive performance of model-free methods. The teacher network tries to \\\"guide\\\" the student network at the training stage by presenting an interpretation of the trajectory. The guidance is implemented by adding to the loss function a regularization term that penalizes the \\\"distance\\\" between the teacher's output and the hidden states of the student. The proposed method was tested and compared to other model-free methods.\", \"The study in this work is interesting and important in RL. It tries to tackle the weakness of model-free methods by introducing the dynamics while avoiding to build a full model like the model-based methods. Searching for an optimal tradeoff between the two would benefit the practical uses.\", \"I'd vote for accepting the manuscript if the authors could address my concerns.\", \"As mentioned in the text, the student internal h and the supervision signal h* are not required to have the same dimensionality. Are they required to have the same number of layers or certain correspondence between layers? Why or Why not? Do the authors have general principles on the design of the teacher and the teaching loss?\", \"Is there any particular reason for using classification loss in all examples especially for a typically regression problem? Did regression loss lead to bad performance?\", \"In the toy XOR example, the particular design of x* seems to play the key role. It is not the typical trajectory of x. This hand-crafted data does not help with demonstrating the teacher learning the powerful interpretation itself.\", \"Though the authors have talked about why not to compare to a model-based method, I do not think it is convincing. As mentioned in the intro section, the model-based methods fail due to partial observability and etc, but I do not see the examples in this study have such issues. The computation of model-based methods depends on the complexity of internal model rather than the task. The argument should then be if the proposed method outperform model-based methods given the same complexity (e.g. the number of parameters) or same amount of data.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
8Xi5MLFE_IW | Episodic Memory for Learning Subjective-Timescale Models | [
"Alexey Zakharov",
"Matthew Crosby",
"Zafeirios Fountas"
] | In model-based learning, an agent’s model is commonly defined over transitions between consecutive states of an environment even though planning often requires reasoning over multi-step timescales, with intermediate states either unnecessary, or worse, accumulating prediction error. In contrast, intelligent behaviour in biological organisms is characterised by the ability to plan over varying temporal scales depending on the context. Inspired by the recent works on human time perception, we devise a novel approach to learning a transition dynamics model, based on the sequences of episodic memories that define the agent's subjective timescale – over which it learns world dynamics and over which future planning is performed. We implement this in the framework of active inference and demonstrate that the resulting subjective-timescale model (STM) can systematically vary the temporal extent of its predictions while preserving the same computational efficiency. Additionally, we show that STM predictions are more likely to introduce future salient events (for example new objects coming into view), incentivising exploration of new areas of the environment. As a result, STM produces more informative action-conditioned roll-outs that assist the agent in making better decisions. We validate significant improvement in our STM agent's performance in the Animal-AI environment against a baseline system, trained using the environment's objective-timescale dynamics. | [
"Episodic Memory",
"Time Perception",
"Active Inference",
"Model-based Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=8Xi5MLFE_IW | https://openreview.net/forum?id=8Xi5MLFE_IW | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"JBtxyeIW1oq",
"NCfucwPRhQB",
"eVGwX08Oxci",
"cFFeSylRBv8",
"maDbZ4WArZl",
"9PqNuSM43Bc",
"MpjErqasb0a",
"Dae7V0e2zBb",
"AyGJlO-Vzsl",
"Ya2jyJegbdK",
"ksmtpkZvju7"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512239,
1606039697634,
1605788022311,
1605561396873,
1605561306521,
1605560671523,
1605560374455,
1605558981063,
1603891235630,
1603881653035,
1603673514320
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3752/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3752/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3752/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3752/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3752/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3752/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3752/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3752/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3752/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3752/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper uses a free-energy formulation to develop an approach to learning \\\"jumpy\\\" transition models, which predict surprising future states. This transition model is used in combination with MCTS and applied to a scavenging task in the Animal AI Olympics, outperforming two baselines.\\n\\nWhile the reviewers praised the importance of the problem tackled, and the novelty of using a free energy approach, there was a general sense amongst the reviewers that the paper wasn't totally clear (especially for an RL audience). R1 also felt that some of the claims of the paper weren't sufficiently evaluated enough, and several reviewers indicated that they felt the baselines were insufficient (or, at a minimum, not described in enough detail to evaluate whether they were sufficient). Given these points, I feel the paper is not quite ready for publication at ICLR. I encourage the authors to flesh out their analysis a bit more, better describe the baselines (and possibly compare to other existing approaches as mentioned by R4), and overall to frame the paper a bit more for the RL community.\", \"one_additional_reference_the_authors_may_be_interested_in\": \"Gregor et al (2018). Temporal difference variational auto-encoder.\"}",
"{\"title\": \"Reply\", \"comment\": \"**Q1**: _we would kindly want to ask you to clarify a little more on this concern, particularly with respect to the connection with active inference not being \\u201cthat strong\\u201d_ : There are more than 8 references to work by Friston et al., but it's unclear to me how relevant all these references are.\\n\\n**Q2**: _We were unable to locate the Darajaman et al. in the proceedings of ICML 2019. Could you please link the paper you are referencing?_ my mistake, the second reference was supposed to be Jayaraman et al, ICLR 2019 (TIME-AGNOSTIC PREDICTION: PREDICTING PREDICTABLE VIDEO FRAMES).\\n\\n**Q5**: Mine was just a vague suggestion, not meant to be addressed in this work. What I mean is, can the agent learn a subjective scale for actions, somehow similar to how it should do for states? Keeping the actions fine-grained might make most of the advantages of the subjective time-scale for states disappear. The heuristic you introduce helps to fix the issue, but it seems too strong to be meaningful (e.g. no U turns, sequences of turns, etc.).\\n\\n**Q8**: I am still not convinced that this environment is sufficient to showcase long horizons plans. Especially with the very coarse action heuristic in place I don't see how the agent can plan long-term, if it doesn't even know what actions it executed. This is consistent with the fact that the simulated plans tend to be not realistic and of worse quality than the standard planner.\\nAll in all, I find this experiment with Animal AI not convincing, and it's a pity because the idea is interesting and deserves a solid evaluation.\\n\\n**Q9**: _We believe this issue can be addressed by training the dynamics model for longer_ This needs to be shown empirically, given the analysis in the paper right now I don't see this would be a given.\"}",
"{\"title\": \"Reply to Authors\", \"comment\": \"The authors' response to Q2 makes sense overall and is convincing, thanks. In a bit more depth:\\n\\n< Our baseline is a deep active inference agent devised by Fountas et al. (NeurIPS 2020), which was compared to several model-free RL agents, such as DQN, A2C, and PPO2. We believe it can be considered the current state of the art for deep active inference agents. >\\n\\nThat's good to know. If the baseline performs similarly to those deep RL agents, specifically on this scavenging task, then knowing about that result would help me as reader with a deep RL background to situate the results of the present paper. The strongest evidence might be e.g. a plot showing DQN, A2C, PPO, and the free energy baseline performing similarly, then STM outperforming all of them. That might be a lot of work, so if there are scavenging tasks in Fountas et al 2020 on which the baseline performs favorably in comparison to DQN etc, then mentioning that in the present paper would go a long way.\\n\\n< We again emphasise that the baseline and STM agent share the weights of all the networks, except for the transition dynamics model. >\\n\\nI must have missed this when reading through. If this rules out alternative explanations for STM's improved performance, it might be worth unpacking that logic for the reader. As a sidenote, I'd have thought that sharing parameters like this would inhibit maximal performance for both architectures.\\n\\nAnswers to Q3-Q5 also make sense, I'll look forward to the next version of the manuscript. One point in more detail:\\n\\n< We did experiment with both longer and shorter roll-outs for the time-locked agent, and found that performing longer roll-outs does not result in a better performance \\u2013 likely related to the problem of error accumulation for one-step prediction models (as can be observed in Figure 4). > \\n\\nThat's good to know. I think it would be convincing if the paper showed that: specifically, if you roll out the time-locked model to the longest environment timestep STM predicts, you get much worse performance. It might seem obvious given the assumption (which I think the authors hold from experience) that the time-locked model suffers severely from accumulating errors. But, for the uninitiated reader, something like this could serve as a quick sanity check and convincing demonstration of STM's usefulness.\"}",
"{\"title\": \"In response to AnonReviewer3 (part 2)\", \"comment\": \"**Q7**: *\\u201cThe experimental results do not provide enough information to understand what tasks can be solved and what cannot be solved in Animal AI environment\\u201d*\\n**A**: We have tested our agents on a number of tasks from the official 2019 Animal-AI Competition; however, we decided not to report these, as we believe it would distract the reader from the main theme of the paper. It is also important to mention the apparent difficulty of testing active inference agents on reward-based tasks (which is characteristic to the testing of RL agents) given that the agent cannot learn from the reward in the same way. We broadly follow the discussion in Fountas et al. Neurips 2020 [1], which further expands on this point. \\n\\n**Q8**: *\\u201cthe reward is going up; how far can the reward go?\\u201d*\\n**A**: The figure shows the cumulative reward from 100,000 randomly-generated environments, rather than average reward per episode.\\n\\n**Q9**: *\\u201chow is the reward computed?\\u201d*\\n**A**: The reward is provided by the environment: ~5 for reaching a yellow/green sphere, ~(-5) for reaching a red sphere, and a constant negative reward of ~(-0.01) at every timestep.\\n\\n**Q10**: *\\u201cis there a ground truth trajectory comparison?\\u201d*\\n**A**: We believe it is not informative for this particular figure, as the most interesting beneficial property of the STM agent is that it predicts imagined salient events (i.e. not necessarily corresponding to the ground truth - which cannot be predicted in this case), and also because the two sequences play out over different timescales. The figure shows the difference between our STM agent (which is able to imagine salient/rare events in its roll-outs) and the baseline agent (which simply predicts the most likely next frame). The ground truth is given in the other figures, however (4, 8).\\n\\n**Q11**: *\\\"I would appreciate more background and intuition on each term in the variational free energy formula.\\\"*\\n**A**: Thank you for the suggestion \\u2013 this seems to be a common wish. We will extend the appendix explanation of active inference. \\n\\nThank you again for your review.\\n\\nBest regards.\\n\\n[1] https://proceedings.neurips.cc/paper/2020/hash/865dfbde8a344b44095495f3591f7407-Abstract.html\"}",
"{\"title\": \"In response to AnonReviewer3 (part 1)\", \"comment\": \"Dear AnonReviewer3,\\n\\nThank you for your review. We are happy to address the concerns you outlined above. \\n\\n**Q1**: *\\u201cThe main sections do not contain sufficient information regarding how the actions are obtained from learned STM models.\\u201d* + *\\u201chow MCTS and MPC baselines differ\\u201d*\\n**A**: We understand your concern here. We will extend these explanations in the next version of the manuscript. \\n\\n**Q2**: *\\u201chow do you recover low-level action sequences\\u201d*\\n**A**: Our model does not involve a hierarchical approach for retrieving action sequences. While we acknowledge that this may seem as a disadvantage (since there is no one-to-one correspondence in the predicted future states and the ground-truth states), the retrieval of low-level sequences of actions is not necessary to observe a visible improvement in the system\\u2019s performance. As mentioned in another response, we are now working on more sophisticated methods for retrieving useful long-term action sequences and hope this will improve the results further.\\n\\n**Q3**: *\\u201cIt is unclear how using the KL divergence for measuring surprise will improve over model's prediction error studied in previous work.\\u201d*\\n**A**: Could you please let us know some of the exact references you are referring to so we can better investigate this point. As we understand it, KL divergence (in this case, variational free energy of the dynamics model) can also be considered to be the prediction error produced by the probabilistic transition dynamics model. Within active inference, it is used to quantify a measure of belief updating and thus, loosely speaking, quantifies the prediction error, as well. As we mentioned in the paper, empirical evidence from neuroscience suggests that prediction error and event saliency is one of the decisive factors on whether an episodic memory will be formed.\\n\\n**Q4**: *\\u201cHow sensitive is it to the KL threshold?\\u201d*\\n**A**: Thank you for raising this point, we agree with you that more should be said about the threshold value choice. Currently, the threshold was treated as a hyperparameter of the model and was manually chosen by inspecting the distribution of transition model surprise values. \\n\\n**Q5**: *\\u201cunfair to compare against the baselines which do not have access to the same information.\\u201d*\\n**A**: We are not entirely sure what you mean by this. Both the baseline and the STM agents have access to exactly the same amount of information. The action sequence aggregation is formed out of the actions performed by the agent, which the objective timescale agent also has access to (though of course stored in the objective timescale).\\n\\n**Q6**: *\\u201cit would be helpful to have more than 1 environment to show the generality of the approach.\\u201d*\\n**A**: We certainly agree with this. However, we believe that the presented results do show that the idea is sound and strongly suggest that it has potential in other domains. We are currently working on further experiments across more benchmarks/environments.\\n\\nTo be continued in the next post...\"}",
"{\"title\": \"In response to AnonReviewer4 (part 2)\", \"comment\": \"**Q9**: *\\u201c[Figure 10] \\u2026 imagined roll-outs are not very consistent, as in almost every single case the color of the spheres changes from green to yellow\\u201d*\\n**A**: The observed interchange between the yellow and green colours was attributed to the fact that the STM was only trained for 250k iterations in the presence of close clustering of yellow and green colour in the latent space by the regularised VAE. We believe this issue can be addressed by training the dynamics model for longer, as this problem was not observed in the objective-timescale agent, which shared the same weights for its VAE and which had its dynamics model trained for about 750k iterations. With Figure 10, however, we wanted to draw your attention to the apparent variability in the temporal extent of the agent\\u2019s predictions, as well the consistency in the agent\\u2019s action-conditioned predictions. For instance, Roll-out #1 shows an initially rapid forward approach to the sphere, which later slows down, while Roll-out #2 correctly predicts observing a green sphere when turning left. We encourage the readers of this paper to think of the subjective-timescale model as not strictly speaking a conventional dynamics model that is attempting to model the true physics or the most likely next state given an action. Rather, the STM will guide the agent by means of predicting salient events over varying time intervals. Nevertheless, we also agree that our results could be made more consistent and visually convincing, and we thank you for pointing to some of these issues. \\n\\n**Q10**: *\\u201c[Figure 11] \\u2026 objects appear out of nowhere instead of smoothly while turning\\u201d*\\n**A**: We believe this is consistent with what we would expect the STM to predict. This is because episodic memories contain \\u2018surprising\\u2019 events, which we categorise into two main categories (Section 4.1, Paragraph 3) \\u2013 epistemic and model-imperfection surprise. As a result of the former, the episodic memory buffer will contain events like spheres appearing in the frame of view when the agent turns. Furthermore, these events do not need to happen \\u2018smoothly\\u2019, because the surprise threshold is fixed at a specific value, below which the agent will not record a memory. We see it as a positive feature of the model in that it skips the less surprising smooth introduction of new objects at the edges.\\n\\n**Q11**: *\\u201c[Figure 12] \\u2026 it appears to me that the physics is significantly more consistent than with STM\\u201d*\\n**A**: While it is true that the objective-timescale model is better at modelling some parts of the physics of the environment, in this paper we argue that that is not what the internal model must necessarily be good at, and can actually be its disadvantage (no imagination-driven exploration, more prone to get stuck in a sub-optimal state). Instead, we train the STM to predict future salient events, driving the agent to places where they are more likely to occur. Nevertheless, STM retains the ability to correctly predict in the short-term, which does not compromise the agent\\u2019s ability to get the reward (or avoid a negative reward) when it is close to one. \\n\\n**Q12**: *\\u201cThe last sentence in the conclusion seems to suggest that the model is not progressively expanding its horizon the more it trains. How come?\\u201d*\\n**A**: Great question. 
This is because the objective-timescale transition model used to collect memories is pre-trained and frozen. We plan to explore using a single transition model in our future work, for which we would expect this behaviour. \\n\\n**Q13**: *\\u201clack of established baselines and benchmarks\\u201d*\\n**A**: Please refer to our answer in Q8.\\n\\n**Q14**: *\\u201can algorithm box to present the method\\u201d*\\n**A**: Thank you for the suggestion \\u2013 we will add this.\\n\\nThank you once again for your thoughtful review. \\n\\nBest regards.\"}",
"{\"title\": \"In response AnonReviewer4 (part 1)\", \"comment\": \"Dear AnonReviewer4,\\n\\nThank you for your comments and feedback. We would love to address the issues you bring up in your review. \\n\\n**Q1**: *\\u201cI am not sure I see the connection with active inference as being that strong or even necessary\\u201d*\\n**A**: Active inference is a model-based cognitive framework that we chose mainly for the reasons of biological plausibility and its intrinsic Bayesian nature. This, as well, is consistent with the Bayesian predictive processing model for time perception used in Fountas et al. (2020) [1]. Regardless, we would kindly want to ask you to clarify a little more on this concern, particularly with respect to the connection with active inference not being \\u201cthat strong\\u201d. \\n\\n**Q2**: *\\u201cThere are two potentially relevant references [...]\\u201d*\\n**A**: Thank you for these! However, we were unable to locate the Darajaman et al. in the proceedings of ICML 2019. Could you please link the paper you are referencing?\\n\\n**Q3**: *\\u201cShouldn\\u2019t* $q(a_t; \\\\phi_a)$ *be a function of s as well?\\u201d*\\n**A**: You are correct in saying that it is also a function of s. Here, however, we follow standard notation from variational inference which drops the conditioning, when denoting the approximate posterior. We chose to stick with the conventions in the literature, but agree that writing it out in full may be more clear.\\n\\n**Q4**: *\\u201cThe angle heuristic might deserve a more in-depth discussion. What if the agent goes in a circle? Or does a U turn? Or a complex sequence of movements in a maze?\\u201d*\\n**A**: Absolutely! We certainly agree with you that the angle heuristic deserves more attention, and we plan to address it more concretely in our future work. The purpose of this paper, however, was to showcase the usefulness of defining a subjective timescale as the top priority. Therefore, we chose a very simple heuristic that was necessary to provide the agent with information to learn action-conditioned predictions. This is not to say that they are perfect and we would expect them to get worse in more complex configurations of the environment; however, we believe that this heuristic was enough to demonstrate the effectiveness of subjective-timescale models. \\n\\n**Q5**: *\\u201cCould one think of applying the same STM principle that is already applied to states, to actions as well?\\u201d*\\n**A**: Could you please clarify the suggestion as there are many ways this might be possible, and it's unclear to us exactly how you are imagining this to work. \\n\\n**Q6**: *\\u201cWhere does $f_{\\\\theta_s}$ appear in this context?\\u201d*\\n**A**: Thank you for pointing it out \\u2013 it is a typo and should be $f_{\\\\theta_h}$, instead. \\n\\n**Q7**: *\\u201cPerhaps the author could add (even in the appendix) a top view of a typical setting that corresponds to one episode?\\u201d*\\n**A**: Definitely, thank you for the suggestion.\\n\\n**Q8**: *\\u201cthe setting chosen by the authors does not reflect the characteristics that they planned to showcase with their method\\u2026 . [...] how can we appreciate that the agent learns to do planning over long horizons?\\u201d*\\n**A**: Although it may seem that the environment is small, we argue that the most important bit is the underlying temporal dynamics \\u2013 e.g. how much does an agent progress forward given a forward action? 
In AAI, it is very slow, requiring the agent to take hundreds of steps (>500) to go to the opposite side of the sandbox. Furthermore, sparse rewards were chosen deliberately to encourage both long- and short-term planning. This becomes even more challenging, as our agents were only allowed to take 500 steps, after which the environment would terminate and the next configuration would be chosen. Therefore, we believe that the chosen set-up, on the contrary, helps with testing our agents for both short- and long-term planning abilities. Now that the idea has been shown to be successful we plan to address other benchmarks/environments that are likely to display more traditional long-term planning behaviour.\\n\\nTo be continued in the next post...\\n\\n[1] https://www.biorxiv.org/content/10.1101/2020.02.17.953133v1\"}",
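The angle heuristic debated in Q4 above is not fully specified in this thread, but one plausible reading can be sketched as follows. This is purely an illustration under assumptions: the action format (a signed turn component and a forward flag) and the net-rotation summary are guesses, and the paper's actual heuristic may differ. The sketch also makes the reviewer's concern visible: opposite turns cancel, so a U-turn or a full circle collapses to roughly zero net rotation.

```python
from collections import namedtuple

# Hypothetical low-level action: a signed turn angle and a forward flag.
Action = namedtuple("Action", ["turn", "forward"])

def aggregate_actions(actions):
    # Summarise the actions between two consecutive episodic memories by
    # net rotation and total forward motion.
    net_turn = sum(a.turn for a in actions)
    forward_steps = sum(1 for a in actions if a.forward)
    return net_turn, forward_steps
```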
"{\"title\": \"In response to AnonReviewer1\", \"comment\": \"Dear AnonReviewer1,\\n\\nThank you for your thoughtful feedback \\u2013 you raise some important points, which we would be happy to address. \\n\\n**Q1**: *\\u201c[...] this difference in demonstrated scale might simply be due to the massive difference in resources that have been devoted to deep RL agents vs active inference agents.\\u201d*\\n\\n**A**: We certainly agree with this statement and believe that deep active inference research could greatly benefit from more computational resources and efforts of scaling it further. \\n\\n\\n**Q2**: *\\u201cI'm basically trusting the authors that theirs is a reasonably strong baseline\\u201d*\\n\\n**A**: Our baseline is a deep active inference agent devised by Fountas et al. (NeurIPS 2020), which was compared to several model-free RL agents, such as DQN, A2C, and PPO2. We believe it can be considered the current state of the art for deep active inference agents. Furthermore, the parameters of this baseline were tuned to yield better performance. We would also like to stress that the only difference between the baseline and our STM agent is the transition model. As such, we can much more confidently attribute the improvement in the performance to the subjective-timescale model. \\n\\n\\nMoving on, you made several points about being uncertain whether the STM agent\\u2019s better performance is indeed due to the subjective-timescale model. Accordingly, we plan to address these in the next version of the manuscript and we thank you for pointing it out. Nevertheless, in response to your concern, we again emphasise that the baseline and STM agent share the weights of all the networks, except for the transition dynamics model. This was done deliberately to address the exact point that you raise. \\n\\n\\n**Q3**: *\\u201cAre these cherry picked examples, or are they actually reflective of general trends?\\u201d*\\n\\n**A**: These are indeed the general trends that we observed by performing random roll-outs in a variety of different settings. For more examples of random roll-outs, please see Figures 10 and 11 in the Appendix.\\n\\nNevertheless, we believe your concerns about the analysis of the results, such as demonstrating *\\u201caggregate results\\u201d* and *\\u201canalyz[ing] the MCTS search trees\\u201d*, are well-justified. We are currently working towards creating more convincing quantitative and qualitative metrics, by which these improvements could be judged. \\n\\n\\n**Q4**: *\\u201c[With regards to baseline agents getting stuck in sub-optimal states], there's no evidence in the main paper for this.\\u201d*\\n\\n**A**: This observation was based on numerous runs we performed and analysed qualitatively. We can find ways to support this observation more quantitatively in the next version.\\n\\n\\n**Q5**: Further, you mention that the hypothesis that *\\u201c[...] the time-locked agent fails simply because it's rollouts aren't long enough to find the rewarding object\\u201d* is not ruled out by the analysis.\\n\\n**A**: There are several points to address here: \\n(1) We did experiment with both longer and shorter roll-outs for the time-locked agent, and found that performing longer roll-outs does not result in a better performance \\u2013 likely related to the problem of error accumulation for one-step prediction models (as can be observed in Figure 4). 
\\n(2) Time-locked models can indeed yield roll-outs that are not long enough; however, it is exactly one of the issues that our STM model can effectively address \\u2013 but do so without resorting to any explicit mechanisms of varying the temporal extent of predictions. \\n(3) Longer roll-outs result in higher computational complexity of the planning process \\u2013 something that is addressed via the subjective-timescale modelling. \\n(4) As shown in Figures 5 and 11, STM agents can additionally imagine objects (affordances that would allow for optimal minimisation of the free energy), which is in stark contrast to the objective-timescale agent in Figures 5 and 12. We argue that this systematic characteristic allows the STM agent to have richer information about its environment (potential affordances), encouraging imagination-driven exploration, while the time-locked agent is deprived of such ability. \\n\\n\\nWe again thank you for your useful feedback, and we will continue working on improving our paper in the meantime. \\n\\nBest regards.\"}",
"{\"title\": \"Important problem and interesting agent, but needs more in-depth analysis.\", \"review\": \"In this paper, the authors describe a variable-timescale prediction model for planning in the context of a deep active inference agent. They show that this agent outperforms a baseline in a scavenging task in a 3D first person environment. They show example rollouts of the baseline and variable-timescale models.\\n\\nUntil reading this paper I wasn't familiar with deep active inference agents, which apparently enable the extension of free energy methods to more complex settings. It seems like an intriguing alternative to deep RL. It's not clear to me whether it has the scaling potential that has been demonstrated for deep RL. Although the experiments reported here are larger scale than experiments with active inference systems prior to Fountas et al. (2020), they seem to be quite simple in comparison to tasks on which deep RL agents excel (e.g. typical Vizdoom and DMLab tasks). I don't mean this as a criticism, but instead as a question mark: this difference in demonstrated scale might simply be due to the massive difference in resources that have been devoted to deep RL agents vs active inference agents. \\n\\nAs a result, I tried to evaluate the paper on its merits in the context of active inference systems, instead of via comparison with deep RL. As such, I'd be happy with a convincing demonstration that (1) the variable timescale model outperforms a strong time-locked model in the scavenging task and (2) that its performance is due to the selection of useful frames for planning that MCTS then uses effectively.\\n\\nFigure 3 seems to answer (1) in the affirmative. I'm basically trusting the authors that theirs is a reasonably strong baseline, since I don't have experience training active inference agents or with this scavenging environment. This will be reflected in my confidence score.\\n\\nHowever, (2) is not substantiated well by the paper's analysis section. The only evidence for this is in the form of two pairs of example rollouts for the time-locked and variable-time models. Are these cherry picked examples, or are they actually reflective of general trends? I'd be much more comfortable if the authors supplied some aggregate results to substantiate this claim:\\n\\n\\\"As a result, our agent consistently predicts farther into the future in the absence of any nearby objects, and slows its timescale, predicting at finer temporal rate, when the objects are close.\\\" \\n\\nI'd really need aggregate results demonstrating that the S-sequences are summarizing long trajectories in a sensible way over a large set over episodes, to feel confident that the system is providing the benefits its purported to. Even better would be to analyze the MCTS search trees to show that the search trajectories over S-seqences have desirable properties.\\n\\nPerhaps more concerningly, the authors say that \\\"As a result, the STM agent is less prone to get stuck in a sub-optimal state, which was commonly observed in the baseline system, and is more inclined to explore the environment beyond its current position\\\". But, as far as I can tell, there's no evidence in the main paper for this. \\n\\nOne concrete concern is that the time-locked agent fails simply because it's rollouts aren't long enough to find the rewarding object. Maybe it would be sufficient to simply randomly drop out timesteps from the trajectory to reach the STM-MCTS performance level. 
The current analyses (and baseline results) don't rule out this hypothesis.\\n\\nIf the authors can provide stronger evidence on these points, I'd be very happy to increase my rating.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Subjective time perception is a nice motivation. Execution could be significantly improved.\", \"review\": \"Summary: The authors propose to train a model not on the objective time-scale of a sequence of frames, but on the subjective time-scale dictated by how surprising events are (where surprise here is defined as being above a certain energy free threshold). The trained action-conditioned model learns to slow down time for complex scenes, and fast forward when things are easily predicted.\\n\\nThe overall topic is an important one, most model based methods suffer from accumulating errors.\\nThe introduction is well written and offers a strong motivation for the rest of the paper. I like the explanation in terms of a distinction between objective and subjective perception of time and events.\\n\\nThere are lots and lots of references to work by Friston et al., but I am not sure I see the connection with active inference as being that strong or even necessary.\\n\\nRegarding the part \\u201cFurthermore, for long-term predictions STM systematically performs temporal jumps (skipping intermediary steps), thus providing more informative future predictions and reducing the detrimental effects of one-step prediction error accumulation.\\u201d There are two potentially relevant references (Neitz et al NeurIPS 2018), and (Darajaman et al, ICML 2019), as both learn models (not necessarily action-conditioned though) that can skip an adaptive number of steps into the future, with similar consequences (i.e. preventing error accumulation and increasing rollout speed).\", \"p4\": \"\\u201cThe habitual network acts as a model-free component of the system, learning to map inferred states directly to actions\\u201d. Shouldn\\u2019t $q(a_t; \\u03c6_a)$ be a function of $s$ as well?\", \"p6\": \"The angle heuristic might deserve a more in-depth discussion. What if the agent goes in a circle? Or does a U turn? Or a complex sequence of movements in a maze?\\nSince the goal of STMs is (at least in part) to reduce progressively the length of S-sequences (such that they start spanning longer and longer horizons), the actions that bridge two episodic memory need to be summarised in a way that is expressive enough.\\nCould one think of applying the same STM principle that is already applied to states, to actions as well?\\n\\n\\u201cImportantly, [the] function $f_{\\\\theta_{s}}$ is deterministic and serves only to encode information about preceding\\u2026\\u201d. Where does $f_{\\\\theta_{s}}$ appear in this context? I can\\u2019t see it anywhere between eq. 4 and 6\\n\\nThis AAI environment is not so established. Perhaps the author could add (even in the appendix) a top view of a typical setting that corresponds to one episode? \\nAlso, it appears that the setting chosen by the authors does not reflect the characteristics that they planned to showcase with their method. Given the presence of one sphere per colour and a relatively small environment, how can we appreciate that the agent learns to do planning over long horizons?\\nPerhaps an environment with longer horizons would be a more adequate testbed?\\n\\nI very much liked that the authors showed the additional results in Appendix C, I think they are extremely important for the paper. However, I disagree with the claims made in the text, as it appears that they are not substantiated by the figures they reference.\\n- \\u201cFigure 10: random roll-outs generated by the system. 
These diverse roll-outs demonstrate that STM is able to: i) make correct action-conditioned predictions, ii) speed up its prediction timescale when objects are far away, iii) slow down the prediction timescale when objects are nearby\\u201d\\nIf I interpret Figure 10 correctly, it seems to suggest that the imagined roll-outs are not very consistent, as in almost every single case the color of the spheres changes from green to yellow and vice-versa during the rollout.\\n- Figure 11: several transitions appear not to be realistic, objects appear out of nowhere instead of smoothly while turning.\\n- Figure 12: while it is true that the baseline model does not generate any sphere, it appears to me that the physics is significantly more consistent than with STM.\\n\\nThe last sentence in the conclusion seems to suggest that the model is not progressively expanding its horizon the more it trains. How come? \\n\\nOverall, my impression is that the paper has a very interesting motivation, but the execution could be significantly improved. The results are not so convincing, due to:\\n1) figures (10-11-12) that do not soundly corroborate the claims, \\n2) evaluation setting that does not allow to really test the claim (i.e. not enough opportunities for increasingly longer horizons as the model improves) \\n3) the lack of established baselines and benchmarks\\n4) the lack of an algorithm box to present the method\", \"minor\": [\"Simplify language where possible (utilised -> used; are capable of -> can; etc.)\", \"Several figures (4,5, 8) are corrupted using preview on a Mac (not sure if it\\u2019s just my computer). I could see them correctly by using Chrome to open the PDF.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting problem and idea, but a little more work to be done\", \"review\": \"Summary:\\n\\nMost model-based RL algorithms learn dynamics models that predicts the next timestep. However, because of model-bias, frequency of timesteps, and objective timescales, the dynamics models can accumulate errors and limited by timescales. The authors propose subjective-timescale model (STM) that instead of predicting the next timesteps they find the \\\"surprising\\\" subsequences of the trajectories and learn temporal-skipping dynamics models over them. The paper shows the improvement over single-step prediction baselines in a first-person navigation domain.\", \"pros\": \"The method aims to address a very important problem in model-based RL. \\n\\nThe idea of using variational free energy with model-based RL seems novel to me, and has not been widely explored.\\n\\nThe qualitative visualization (figure 4 and 5) provides a nice understanding of what the method is doing as well as what it is capable of in first-person navigation.\", \"cons\": \"-- Methods --\\n\\nThe main sections do not contain sufficient information regarding how the actions are obtained from learned STM models. I find one paragraph in section 3 in sufficient. The experimental sections also do not mention how MCTS and MPC baselines differ. Please clarify. Also, how do you recover low-level action sequences from the aggregated actions after MCTS? I do not find the answer from the paper.\\n\\nThe keyframe selection method requires more justification. It is unclear how using the KL divergence for measuring surprise will improve over model's prediction error studied in previous work.\\nIt would be amazing to provide a theoretical justification for the heurstics toward, e.g., saying something about the end task performance, if possible. How sensitive is it to the KL threshold? Please provide this study.\\n\\nThe action sequence aggregation is domain specific which seems a bit unfair to compare against the baselines which **do not** have access to the same information. There should be more baselines or ablation studies to disentangle the improvement of the method from this domain-specific assumption.\\n\\nThe paper uses different indexing styles which make the method more confusing than it should have been. Please choose one between indexing tau or arithematics on tau.\\n\\n-- Experiments --\\n\\nAs metioned briefly before, more baselines or ablation will be critical to judge the importance of the proposed model? What about compare against other sliency approaches such as prediction error for memory accumulation? Also, it would be helpful to have more than 1 environment to show the genrality of the approach.\\n\\nThe experimental results do not provide enough information to understand what tasks can be solved and what cannot be solved in Animal AI environment. Can the we provide the success rates and categorized by difficulty levels? These information will be helpful in understanding what STM can and cannot do. Perhaps, having a link to some videos will also help.\\n\\nFigure 3 shows that the reward is going up; how far can the rward go? It is still increasing. 
\\n\\nIn figure 3, how is the reward computed?\\n\\nIn figure 5, is there a groundturth trajectory comparison?\\n\\n\\n-- Others --\\n\\nPersonally, I would appreciate more background and intuition on each term in the variational free energy formula.\", \"conclusion\": \"Again, I believe that this work is addressing a very important question with an interesting idea, but it may require a little bit more work to make the case. I appreciate the authors thinking about this problem, and hope the authors are encouraged to continue their work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
USCNapootw | Certify or Predict: Boosting Certified Robustness with Compositional Architectures | [
"Mark Niklas Mueller",
"Mislav Balunovic",
"Martin Vechev"
] | A core challenge with existing certified defense mechanisms is that while they improve certified robustness, they also tend to drastically decrease natural accuracy, making it difficult to use these methods in practice. In this work, we propose a new architecture which addresses this challenge and enables one to boost the certified robustness of any state-of-the-art deep network, while controlling the overall accuracy loss, without requiring retraining. The key idea is to combine this model with a (smaller) certified network where at inference time, an adaptive selection mechanism decides on the network to process the input sample. The approach is compositional: one can combine any pair of state-of-the-art (e.g., EfficientNet or ResNet) and certified networks, without restriction. The resulting architecture enables much higher natural accuracy than previously possible with certified defenses alone, while substantially boosting the certified robustness of deep networks. We demonstrate the effectiveness of this adaptive approach on a variety of datasets and architectures. For instance, on CIFAR-10 with an $\ell_\infty$ perturbation of 2/255, we are the first to obtain a high natural accuracy (90.1%) with non-trivial certified robustness (27.5%). Notably, prior state-of-the-art methods incur a substantial drop in accuracy for a similar certified robustness. | [
"Provable Robustness",
"Network Architecture",
"Robustness",
"Adversarial Accuracy",
"Certified Robustness"
] | Accept (Poster) | https://openreview.net/pdf?id=USCNapootw | https://openreview.net/forum?id=USCNapootw | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"WJpj1hcLN-S",
"W6bJRvAwNVN",
"beb1PKgrvCW",
"2naPFOoYLEk",
"v7yTgwfw8V8",
"SGeaqfes9h4",
"xOyhHMNyCRn",
"7voN2mJsVEh",
"7Q7DDH3LW-U",
"LfYE1jxXsbH",
"k79_rDr_MB",
"jqyabsZrnRw",
"oL6YjHo11Eh",
"7FBOoprwMr",
"XNHtWVozeEr",
"tu6qtuPNWKd",
"u_nDhlZA5Qr",
"uKNh_jJQxrE",
"_TSie5EUUmx",
"aDfRDUFCtg",
"f9zniUAE9E",
"MI72dQDfmd",
"ER0nvrQIU5r"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040354051,
1606305082343,
1606289562038,
1606221319693,
1606221207658,
1606221053380,
1606220980054,
1606220594732,
1606220481539,
1606198574138,
1606197977723,
1606197587748,
1606193154087,
1606192693931,
1606127723153,
1605645701966,
1605645652382,
1605645374957,
1605645304879,
1605645224260,
1603940502394,
1603895974010,
1603844135431
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3751/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a selection mechanism to choose between a certified model with low clean accuracy and a naturally trained model with high accuracy, to improve the standard clean accuracy for certifiably robust models. At a high-level, the idea behind this combined system is that when the certified model cannot certify, one should avoid using it for classification, but rather should use a naturally trained model. A state-of-the-art naturally trained networks is used as the \\\"core network\\\", and a small certification network with high certifiable robustness is used as the \\\"certification network\\\". The major contribution is a selection network that adaptively chooses between these two networks.\\n\\nPro\\n+ The idea of using two networks adaptively is novel. The proposed selection mechanism has been shown to be able to combine the merits of both networks to obtain better natural accuracy with good certified robustness. \\n\\n\\nCon\\n- The experiment section still has room for improvement. Specifically, the presentation of the results were not convincingly conveying the tradeoff between the clean accuracy and the certified accuracy. After the rebuttal, the authors made some improvements that addressed many of the concerns about the clarity and reproducibility issues. However, reviewers suggest further polishing the experiment section. \\n\\nOverall, I think the novelty of the paper combined with the promising results achieved outweigh the presentation issues. I would recommend accepting this paper.\"}",
"{\"title\": \"Response to final comments by Reviewer3\", \"comment\": \"We thank the reviewer for their valuable suggestions. We updated the paper and answered the questions below:\", \"q\": \"Can you add the missing purple triangle in Figure 4?\\n\\n-> Yes, we have done so.\"}",
"{\"title\": \"Thank you for the response. Please make sure to take more time to polish the paper and address the issues below in final version.\", \"comment\": \"Thank you for the response, and the results are presented in a better way now in the last revision. Only some issues left:\\n\\n1. In Figure 7 (now in appendix) the CROWN-IBP (Conv5) + ACE results seem still missing, so this figure is not consistent as Figure 2 and 3. In each of the figure, we should have both ACE models and existing models (using the same certification network) under different weights (kappa) of natural loss. Please make sure to add it in the final revision.\\n\\n2. In Table 1 and 4, it can be more clear if two columns are presented, one showing the training method of certification network (COLT/CROWN-IBP/LiRPA) and one showing the ACE training method (COLT/IBP).\\n\\n3. Despite the arguments made by the reviewer, I think it is still better to add some results using naturally trained models at least for one setting (e.g., CIFAR eps=8/255), because future works may use this paper as a baseline and using naturally trained models is an important setting.\\n\\n4. The purple triangle on Figure 4 is missing.\\n\\nThank you for explaining to me the difference between COLT/CROWN-IBP and (Xu et al.). It is now clear to me.\\n\\nBased on the new revision, I have increased to score to 6 (assuming the authors will address the remaining issues above). I believe the paper still has room for improvements so cannot further increase my score. Please make sure to take more time to polish the paper and address the issues above.\"}",
"{\"title\": \"Full Standardization of Naming\", \"comment\": \"We thank the reviewer for this note and have now fully standardized the naming and removed any mention of ConvMedBig.\"}",
"{\"title\": \"Computing Adversarial Accuracy and Gradient Masking Concerns\", \"comment\": \"We include the explanation on how we compute adversarial accuracy in Appendix E and point to it in the corresponding paragraph in the evaluation. We have now also added a short explanation of why we choose an adversarially trained core network.\"}",
"{\"title\": \"Response to second comment of Reviewer3\", \"comment\": \"We thank the reviewer for their quick response, allowing us to address the remaining points as well. We have again summarized the main points and answer them below:\", \"q\": \"Can you improve the clarity of Figure 2 as it currently looks quite crowded?\\n\\n-> Yes, to improve the clarity we split Figure 2 in the new Figure 2 and Figure 7, removed the Conv2 results, refer to the very dark grey as black from now on and added a note regarding what to focus on to the caption.\\n\\n[1] Balunovic, Mislav, and Martin Vechev. \\\"Adversarial training and provable defenses: Bridging the gap.\\\" International Conference on Learning Representations. 2019.\\n[2] Zhang, Huan, et al. \\\"Towards stable and efficient training of verifiably robust neural networks.\\\" arXiv preprint arXiv:1906.06316 (2019).\\n[3] Xu, Kaidi, et al. \\\"Automatic Perturbation Analysis on General Computational Graphs.\\\" arXiv preprint arXiv:2002.12920 (2020).\"}",
"{\"title\": \"Rearranged presentation of results\", \"comment\": \"We thank the reviewer for their suggestions and have rearranged our results to improve the presentation. We have again summarized the main points and answer them below:\", \"q\": \"Is the intent of Table 1 to simply present results on a range of datasets and perturbation sizes?\\n\\n-> Yes, the intent of Table 1 was to show that ACE can achieve consistent results over a range of datasets, architectures and perturbation sizes. We have reduced its length significantly, moving a full copy to Appendix F.\"}",
"{\"title\": \"Resolving the inconsistency between abstract and Introduction\", \"comment\": \"We thank the reviewer for this observation. We have resolved the inconsistency as described in the reply above.\"}",
"{\"title\": \"Comparison with state-of-the-art in the abstract and introduction\", \"comment\": \"We thank the reviewer for their quick response, allowing us to address the remaining points as well. We have again summarized the main points and answer them below:\", \"q\": \"Why don\\u2019t you compare your results to \\u201cConv3, COLT_MILP\\u201d which achieves a much higher certified robustness in the abstract?\\n\\n-> We originally did not reference any results computed with MILP as we did not want to focus on numbers where the very high computational requirements (up to an hour per sample) make the computation of a tradeoff curve for multiple reference models infeasible. This is also the reason why we did not report the MILP certified numbers of the COLT ACE model in the abstract and introduction. We decided to remove the reference numbers altogether, as we can not provide sufficient context to put them into the right perspective. We note that while it might very well be possible to train a larger COLT_MILP model the certification of even a Conv3 model already takes well above a week for the full test set.\"}",
"{\"title\": \"Response to Author Response, Part I\", \"comment\": \"## Standardization of Naming\\nThanks for working to standardize the naming throughout the paper. It looks like there are still a few stray references to DM-Large after it is first introduced in the \\\"Models and Datasets\\\" section of Experimental Evaluation. (See for example the last paragraph on Page 6, the caption of Figure 2, the legend of Figure 2, and the footnotes in Table 1).\\n\\nAlso, I'd encourage the authors not to reference the term ConvMedBig, since it's never mentioned in the original paper. Instead, this work could point to the paragraph in https://openreview.net/pdf?id=SJxSDxrKDr where the relevant architecture is described.\"}",
"{\"title\": \"Response to Author Response, Part II\", \"comment\": \"## Computing Adversarial Accuracy and Gradient Masking Concerns\\nThanks for explaining this. Did you include this explanation in the current version of the paper?\"}",
"{\"title\": \"Agree that presentation of paper could be improved\", \"comment\": \"I'd like to echo Reviewer 3's sentiment that the presentation of the paper could be improved. In particular, for me, it's still hard to figure out what the takeaway from Figure 2 and Table 1 is - even after reading the surrounding text.\\n\\n### Figure 2\\n\\nHere are some possible takeaways from Figure 2 as it currently stands. Which of these should readers focus on?\\n\\n- For a range of different _types_ of core networks (colored triangles), ACE SelectionNet is able to provide a good certifiable accuracy - natural accuracy tradeoff (colored squares)\\n- ACE SelectionNet provides a better certifiable accuracy - natural accuracy tradeoff than CROWN-IBP (at a particular value of certifiable accuracy or natural accuracy, the yellow gradient is steeper than the other colors)\\n- ACE Entropy has better performance than the ACE SelectionNet (comparing the black diamond and the black square)\\n\\nAs Reviewer 3 mentions, showing just one setting of COLT along with the associated selection net would suffice here if the intention is to compare the certifiable accuracy - natural accuracy tradeoff. In any case, it would be helpful to have a concise guide to the Figure in the caption.\\n\\n### Table 1\\n\\nI'm not sure what I should be taking away from this table. \\n\\n- Is it simply meant to present results for a range of datasets and $\\\\epsilon$ values? (If so, one result per dataset / $\\\\epsilon$ pair should suffice).\"}",
"{\"title\": \"Network selected for comparison in the abstract\", \"comment\": \"In the abstract, the paper states that\\n\\n> we are the first to obtain a high natural accuracy (90.1%) with non-trivial certified robustness (27.5%). Notably, prior state-of-the-art methods incur a substantial drop in accuracy (79.5%) for a similar certified robustness (24.6%)\\n\\n(It appears that the comparison is being made with the \\\"DM-Large, CROWN IBP\\\" network (yellow) that is third from the right.)\\n\\nThis gives the impression that ACE produces networks with higher natural accuracy AND certified robustness than the state of the art, while in fact ACE explicitly trades off some certified robustness for a higher natural accuracy. \\n\\nIt seems like the fairer comparison here would be with the \\\"Conv3, COLT & MILP\\\" network. This network has only a slightly higher drop in accuracy (to `~78%`) but with a far higher certified robustness (`~60%`). (In fact, I expect that it would be possible to train a slightly larger \\\"COLT & MILP\\\" network that has a natural accuracy greater than the 79.5% value above).\"}",
"{\"title\": \"Apparent Inconsistenct between Abstract and Introduction\", \"comment\": \"With the updated results on CROWN-IBP, it looks like the abstract was updated but the introduction was not. Quoting the paper as at the time of this comment:\\n\\n### Abstract\\n\\n> For instance, on CIFAR-10 with an `$l_\\u221e$ perturbation of 2/255, we are the first to obtain a high natural accuracy (90.1%) with non-trivial certified robustness (27.5%). Notably, prior state-of-the-art methods incur a substantial drop in accuracy (79.5%) for a similar certified robustness (24.6%).\\n\\n### Introduction, third paragraph\\n\\n> For example, on the challenging CIFAR10 dataset with an `\\u221e perturbation of 2/255, we obtain 91.6% natural accuracy and a certified robustness of 22.8%. On the same task, prior approaches cannot obtain the same natural accuracies\\nfor any non-trivial certified robustness and, when tuned to comparable levels of certified robustness (16.5%), only obtain a natural accuracy of 77.4%\"}",
"{\"title\": \"Rating increased based on the new results however there are still some issues\", \"comment\": \"Thank you for providing the additional results. These results are very helpful and essential for this paper.\\n\\nI am mostly convinced that the proposed method can perform better then directly\\ntuning a weight on natural training loss (the \\\"kappa\\\" parameter) in existing\\nworks. From Figure 5, 7 and 8 in Appendix it seems the author's claim is\\nsupported. So I am increasing my rating by 1.\\n\\nThere are still several presentation issues in this paper however, so I cannot\\ngive a firm accept for this paper. From Figure 2 and Table 1 in the main text,\\nif I don't read the new results in appendix very carefully, I am still not\\nconvinced that the proposed approach is better than directly weighting the loss\\nfunction. The main concern is that, the issue in Figure 2 is still unresolved\", \"in_the_updated_revision\": \"for COLT there is no comparison against simple loss\\nfunction weighting (I understand the original COLT paper does not use a natural\\nloss, but it can be easily added and tuned for a trade-off for clean accuracy),\\nand for CROWN-IBP there is no ACE variants. Also Table 1 is not really helpful\\nhere since from this table only one data point for each setting is shown and it\\nis impossible to show the trade-off. I would suggest using Figure 7, Figure 8\\nto present the results.\\n\\nAdditionally, I am also confused why the authors use an adversarially trained\\nnetwork as the core classification network (same question asked by\\nAnonReviewer4). It seems using adversarial trained model provides no benefits\\nfor the metrics reported in the paper. The answer for AnonReviewer4 is not\\nvery convincing to me. Is the real reason somewhat related to the adversary\\naccuracy in Table 1?\\n\\nOver all the paper looks little bit rushed and can be confusing for people not\\nvery familiar with related papers. For example:\\n\\n1. In Figure 7, there are several triangles not presented in the legend. Also the green, red lines are CROWN-IBP not IBP (lengend label is wrong). \\n\\n2. Table 1 also has a similar issue, it seems both CROWN-IBP and (Xu et al.) are referred to as \\\"IBP\\\" under the \\\"Provable training\\\" column, which is inaccurate.\\n\\n3. Some discussions on the newly added training method (Xu et al.) for TinyImageNet should be added in section 3. Is their training method very different from COLT or CROWN-IBP (any reason why COLT and CROWN-IBP do not work on TinyImageNet)?\\n\\n4. In Figure 2 the \\\"Conv2, COLT\\\" marks look like black for me, not gray (as mentioned in text). Also I think there are too many lines on this figure; I think just showing one setting of COLT is sufficient here (e.g., Conv 3 COLT).\\n\\nThanks again for the response and I hope my suggestions above can help the authors further improve their paper.\"}",
"{\"title\": \"Response to Reviewer4 Part II\", \"comment\": \"Q: What exactly is meant by the statement that COLT is unsuitable for the accuracy-robustness trade-offs that are target?\\n\\n-> We train the largest network to which COLT scales (we use the largest network from [1]) using standard training and observe that we are unable to achieve a natural accuracy comparable to what ACE achieves while providing robustness guarantees. Therefore, we conclude that an individual COLT trained network lacks the capacity to achieve high natural (and additionally certifiable) accuracies. The \\u201cCOLT\\u201d entry in Table 1 refers to the method we use to train the provable networks of the ACE architecture (selection- and certification-network) in contrast to using it for an individual provable network. We changed the corresponding sections and hope the new formulation is more clear.\", \"q\": \"Can you give a more detailed description of the certification methods you use?\\n\\n-> Yes, we use IBP [2] and DeepZ [3] for interval and zonotope certification of IBP and COLT trained networks, respectively. Where explicitly stated, we use MILP certification [5] with some optimizations from [1]. We focus on how we certify the compositional architecture using arbitrary methods instead of how these methods work for individual networks to emphasize the orthogonality of our approach to individual certification methods. We now added additional information in the appropriate section, pointing the reader to the original sources for these methods.\", \"in_short\": \"We considered the problem of gradient masking and use the following approach to avoid overly optimistic results. We try to prove that the core- and certification-network can not be reached for a given sample. If this fails, we compute an adversarial attack against the corresponding network in isolation. We are aware that this can lead to samples that would be actually classified correctly by the compositional network, as that specific perturbation might not get selected for classification by the network it successfully attacked.\\nWe also considered an approach not suffering from this problem (described in Appendix E) but chose to report the more conservative numbers.\", \"additional_feedback\": \"We corrected the typo and homogenized the notation.\\n\\u201cStd\\u201d is indeed short for standard deviation. We adapted the caption to make this more clear.\\nWe moved Figure 2 to the middle of the corresponding section and hope to have made its intentions more clear in the text preceding it.\\n\\n\\n\\n[1] Balunovic, Mislav, and Martin Vechev. \\\"Adversarial training and provable defenses: Bridging the gap.\\\" International Conference on Learning Representations. 2019.\\n\\n[2] Gowal, Sven, et al. \\\"On the effectiveness of interval bound propagation for training verifiably robust models.\\\" arXiv preprint arXiv:1810.12715 (2018).\\n\\n[3] Singh, Gagandeep, et al. \\\"Fast and effective robustness certification.\\\" Advances in Neural Information Processing Systems 31 (2018): 10802-10813.\\n\\n[5] Tjeng, Vincent, Kai Xiao, and Russ Tedrake. \\\"Evaluating robustness of neural networks with mixed integer programming.\\\" arXiv preprint arXiv:1711.07356 (2017).\"}",
"{\"title\": \"Response to Reviewer4 Part I\", \"comment\": \"We thank the reviewer for their insightful and detailed questions and comments. We did in fact identify some of the same questions but could only conduct the corresponding experiments after the initial submission deadline.\", \"we_rephrased_your_comments_and_hope_to_answer_all_points_below\": \"\", \"q\": \"Where do you specify how the performance of the state-of-the-art-methods, mentioned in abstract, is obtained? Are they what you measured for the CROWN-IBP trained Conv5 networks?\\n\\n\\n-> Indeed these numbers were taken from the Conv5 networks we trained with CROWN-IBP.\\nWe updated the numbers to correspond to the more expensive training schedule discussed above and updated our evaluation section to note where we take these numbers from. We chose the Conv5 setpoint, by first selecting a setpoint for ACE in its intended working range and then selecting the Conv5 setpoint with the largest smaller certified accuracy, to give it the best chance of beating our natural accuracy. We agree that individual setpoints are not ideal for illustrating the trade-off character, therefore we illustrate the trade-off between certified and standard accuracy in Figure 2 (and now also Figure 3, 4, and 7) (formerly 5, 7, and 8)).\", \"edit\": \"We rearranged some of our figures for a second revised version and have changed the figure numbers in this response accordingly.\\n\\n[1] Balunovic, Mislav, and Martin Vechev. \\\"Adversarial training and provable defenses: Bridging the gap.\\\" International Conference on Learning Representations. 2019.\\n\\n[3] Singh, Gagandeep, et al. \\\"Fast and effective robustness certification.\\\" Advances in Neural Information Processing Systems 31 (2018): 10802-10813.\\n\\n[4] Zhang, Huan, et al. \\\"Towards stable and efficient training of verifiably robust neural networks.\\\" arXiv preprint arXiv:1906.06316 (2019).\\n\\n[5] Tjeng, Vincent, Kai Xiao, and Russ Tedrake. \\\"Evaluating robustness of neural networks with mixed integer programming.\\\" arXiv preprint arXiv:1711.07356 (2017).\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We thank the reviewer for their interesting questions and comments.\", \"we_rephrased_your_comments_and_hope_to_answer_all_points_below\": \"\", \"q\": \"Can you provide some interpretation of the learned selection mechanism , in particular, what features of the samples are key for selection?\\n\\n-> Yes, we can provide some interpretation of how the selection mechanism decides. We observe that if a certification network has difficulty differentiating a group of classes in an adversarial setting (blocks of high off-diagonal terms in the confusion matrix), while they are easy to differentiate from other classes, then classes from this group are selected at a much lower rate for certification. An example for such a group, are the animal classes in CIFAR-10. Whether this effect is due to the selection network learning underlying features that make these samples more difficult to classify provably correct, learning that all of these classes are difficult to certify, or most likely a combination of the two, we can not say with certainty.\\nBecause the selection-network was not set up specifically to be an interpretable model (which generally incurs accuracy penalties), it is difficult to pinpoint individual features learned by the selection mechanism. \\nWe added an experiment comparing three selection networks on an otherwise identical ACE model. We transferred one selection network from a different Conv3 ACE model, trained on using labels based on the adversarial correctness of the sample and trained one in the standard way using provable correctness. We observe that the transferred selection network performs very well, while the one trained using adversarial accuracy performs notably worse. This suggests to us that a) the certification difficulty of a sample is stable over different certification networks at least to some degree and b) that the selection network does learn features distinguishing the difficulty of finding an adversarial example from provable robustness. We present these results in more detail in Appendix D and hope this provides some intuition on how the selection decision is made.\\n\\n[1] Zhang, Hongyang, et al. \\\"Theoretically principled trade-off between robustness and accuracy.\\\" arXiv preprint arXiv:1901.08573 (2019).\"}",
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank the reviewer for their insightful and detailed questions and comments. We did in fact identify some of the same questions but could only conduct the corresponding experiments after the initial submission deadline.\", \"we_rephrased_your_comments_and_hope_to_answer_all_points_below\": \"\", \"q\": \"Is it misleading to compare an ACE model based on COLT with individual models using a cheaper certified training method such as CROWN-IBP?\\n\\n-> No, we believe it to be a feature of the ACE architecture that small models with high certified accuracies, trained and certified with expensive provable training and certification methods, can be combined with larger core-networks to obtain ACE models that achieve high natural accuracies, typically not accessible to networks of a size to which these expensive methods scale. We believe a comparison with more scalable provable training methods to be appropriate if these are required to scale to the larger networks required to achieve comparable natural accuracies. \\nHowever, we want to point out that the certification networks we used for the comparison in Figure 2 are actually worse than the CROWN-IBP networks and that it requires less than half the GPU time to train and certify these models. Using more expensive certification methods such as MILP, which don\\u2019t scale to Conv5, we can improve the performance of the ACE model further.\", \"edit\": \"We rearranged some of our figures for a second revised version and have changed the figure numbers in this response accordingly.\\n\\n\\n[1] Zhang, Huan, et al. \\\"Towards stable and efficient training of verifiably robust neural networks.\\\" arXiv preprint arXiv:1906.06316 (2019).\\n\\n[2] Xu, Kaidi, et al. \\\"Automatic Perturbation Analysis on General Computational Graphs.\\\" arXiv preprint arXiv:2002.12920 (2020).\\n\\n[3] Balunovic, Mislav, and Martin Vechev. \\\"Adversarial training and provable defenses: Bridging the gap.\\\" International Conference on Learning Representations. 2019.\"}",
"{\"title\": \"General Answers\", \"comment\": \"We thank reviewers for their feedback and comments. We first list some general updates we have made to the submission, then proceed to answer common questions in the reviews. Finally, we respond to questions of each individual reviewer.\", \"general_updates\": \"We updated our IBP certification implementation to obtain tighter bounds on the deltas between output activations, improving the certified accuracies for our IBP certified networks. This does not influence the baseline networks, as we did not use our implementation for their certification.\\nAs the architectures Conv3 and Conv5 are identical to the 3 convolutional layer architecture from [2] and DM-Large from [1] we have changed the notation to the unified ConvX standard.\", \"common_questions\": \"\", \"q\": \"Can you provide additional evidence to substantiate the claim that ACE produces more favourable accuracy-robustness tradeoff than current state-of-the-art methods?\\n\\n\\n-> Yes, we agree that Table 1 is not an ideal way to present results on the tradeoff and have added Figures 3, 4, and 7 (formerly 5, 7 and 8) in the style of Figure 2 for CIFAR-10 at 2/255, 8/255 and TinyImageNet at 1/255, respectively. For CIFAR-10 at both perturbation levels we trained multiple CROWN-IBP networks at different tradeoffs as a reference. For TinyImageNet this was unfortunately not feasible in the given timeframe. For CIFAR-10 at 2/255 we use the Conv3 directly from [2] as a certification-network for our ACE model and use MILP [4] to certify the certification-network. We compare with models trained using COLT with different natural loss components, but can only certify them using DeepZ [3], as MILP certification would take in excess of a week per setpoint. Therefore, we also evaluate our ACE models using DeepZ and report results in Figure 2 (formerly 5). For CIFAR10 at 8/255 we use one Conv5 model directly from [1] and one trained ourselves at a higher natural loss component as certification-networks for ACE models. Additionally we add weaker Conv3 models to the comparison and show that for all ACE models there are setpoints where they outperform the individual Conv5 models. This is illustrated in Figure 3 (formerly 7).\", \"edit\": \"We rearranged some of our figures for a second revised version and have changed the figure numbers in this response accordingly.\\n\\n[1] Zhang, Huan, et al. \\\"Towards stable and efficient training of verifiably robust neural networks.\\\" arXiv preprint arXiv:1906.06316 (2019).\\n\\n[2] Balunovic, Mislav, and Martin Vechev. \\\"Adversarial training and provable defenses: Bridging the gap.\\\" International Conference on Learning Representations. 2019.\\n\\n[3] Singh, Gagandeep, et al. \\\"Fast and effective robustness certification.\\\" Advances in Neural Information Processing Systems 31 (2018): 10802-10813.\\n\\n[4] Tjeng, Vincent, Kai Xiao, and Russ Tedrake. \\\"Evaluating robustness of neural networks with mixed integer programming.\\\" arXiv preprint arXiv:1711.07356 (2017).\"}",
"{\"title\": \"Review for Paper3751\", \"review\": \"This paper focuses on improving the standard (clean) accuracy for certifiably\\nrobust models. To achieve good certified accuracy, previous works typically\\nmake the standard accuracy much worse than naturally trained models. The\\nauthors propose a selection mechanism to choose between a certified model with\\nlow clean accuracy and a naturally trained model with high clean accuracy. At\\na high level, when the certified model cannot certify, there is no point to use\\nit for classification. A naturally trained model (which cannot be certified as\\nwell) is selected to improve standard accuracy.\", \"strengths\": \"Most previous works on certified defense focus on improving certified accuracy,\\nand standard accuracy is usually sacrificed. This paper focuses on a different\\nand important setting where high standard accuracy is desired, which is\\nneglected by many previous works. I think this is a good step.\\n\\nThe proposed selection scheme can balance a certifiably robust model with a\\nnaturally trained but highly accurate model. Such a combination can be helpful\\nin the settings where high prediction accuracy is required.\\n\\nThe proposed method is technically sound. Using a certified selector makes the\\nwhole network certified when it chooses the certified network. To improve clean\\naccuracy, The core network is used when the certified selector chooses the core\\nnetwork (i.e., the selector believes the certified network cannot make a good\\nprediction on this example) or cannot certify.\\n\\nThe paper overall is well motivated and organized.\", \"issues_and_questions\": \"At a high level, this certification scheme does not improve certified accuracy\\n(it only makes it worse); it only helps with the verified accuracy vs. clean\\naccuracy trade-off. Thus, a crucial part of evaluation is to show the verified\\naccuracy vs. clean accuracy tradeoff. However, it is not well demonstrated in\\nthe experiments. Especially, I think results Table 1 are not so useful because\\nwe can't see how the baseline certified defense models perform and cannot see\\nthis tradeoff. Also, the certified accuracy numbers are really low compared to\\nother works, and sometimes close to 0 (e.g., on ImageNet-200 only 3% accuracy).\\nThus, it is important to show a tradeoff figure here.\\n\\nI recommend using figures similar to Figure 2 to present the results for all\\nsettings (CIFAR 2/255 and 8/255; downscaled ImageNet-200 at 1/255) (but be\\naware Figure 2 has its own issues, see comments below). Importantly, we should\\nfix a well known certified model (e.g., COLT or CROWN-IBP) and then, apply ACE\\nwith different thresholds to see how the clean accuracy improves with dropped\\nverified accuracy. For CIFAR, COLT or CROWN-IBP pretrained models can be used\\nas the base certified model. For Imagenet-200, I found a recent work [1]\\npresented certified defense models on 64*64 TinyImageNet and ImageNet datasets\\nwhich can be helpful. They reported around 15% certified accuracy and also uses\\nmuch larger model structures which should improve the results in this paper by\\nusing their pretrained models as the base certified model for selection (I\\ndoubt the simple CNN models in this paper are sufficient for ImageNet). 
Again,\\nthe trade-off part is the most important results to see in this paper, which is\\nnot well demonstrated.\\n\\nFigure 2 made a misleading comparison because the ACE based methods are using\\nCOLT as the base certified classifier and it is inappropriate to compare it to\\nCROWN-IBP with different kappas. We should either also use COLT trained with\\ndifferent weights on natural loss (similar to the kappa in CROWN-IBP) to see\\nthis tradeoff, or use CROWN-IBP as the base certified classifier in this\\nfigure. Especially, in the CIFAR 2/255 setting, COLT achieves better clean\\naccuracy than CROWN-IBP, so this gives ACE an advantage in this comparison, and\\nthe claim that ACE achieves a better trade-off than using the tuned kappa\\nparameters in (CROWN-)IBP training cannot be justified.\\n\\nIt also seems to me that in Figure 2 the CIFAR 2/255 CROWN-IBP numbers are much\\nworse than the ones reported in CROWN-IBP paper (they reported 28.48% standard\\nerror and 46.03% verified error), but in Figure 2 it is much worse (~35%\\nstandard error and ~50% verified error). If we use the correct CROWN-IBP model,\\nit should start at a similar place at the ACE based methods in Figure 2, rather\\nthan on the far left. Can you explain?\", \"conclusion\": \"I like the aim on standard accuracy and the network selection idea proposed in\\nthis paper, but its current evaluation is partially missing or misleading and\\ncannot justify all claims. So I cannot recommend accepting its current version.\\nHowever I will be glad to discuss with the authors and re-evaluate the paper\\nbased on new evaluation results from the authors. I will be happy to accept this\\npaper if the authors can address my issues mentioned above.\\n\\n---\\n### After rebuttal\\n\\nSee my reply below for my comments after rebuttal. Overall I feel the paper still has room for improvement and there are several open issues, but it has been improved so it is marginally above acceptance threshold now.\\n\\n---\", \"reference\": \"[1] Xu, Kaidi, et al. \\\"Provable, Scalable and Automatic Perturbation Analysis on General Computational Graphs\\\" https://arxiv.org/pdf/2002.12920\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
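The trade-off sweep this review asks for can be written down compactly. The following is a hypothetical sketch only: `select_score`, `core`, `cert`, and `certify` are assumed stand-ins, and a sound certificate for the composition would also have to verify the selection decision over the whole perturbation ball, which is omitted here for brevity.

```python
def tradeoff_curve(dataset, thresholds, select_score, core, cert, certify):
    """Fix one certified base model, sweep the selection threshold, and
    record (threshold, natural accuracy, simplified certified accuracy)."""
    n = len(dataset)
    curve = []
    for t in thresholds:
        natural = certified = 0
        for x, y in dataset:
            if select_score(x) >= t:              # route to certified branch
                natural += int(cert(x) == y)
                certified += int(certify(x, y))   # e.g. an IBP/DeepZ/MILP check
            else:                                 # route to the accurate core
                natural += int(core(x) == y)
        curve.append((t, natural / n, certified / n))
    return curve
```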
"{\"title\": \"Combining SOTA network with certified network using an adaptive selection mechanism\", \"review\": \"This paper proposes a new network architecture that combines 1) a state-of-the-art deep neural network with high accuracy (but potentially no robustness certificate), and 2) a small certification network with high certifiable robustness (but not necessarily very high accuracy), using a selection network that adaptively chooses between these two networks. They show that by doing so, the new architecture is able to take advantage of both networks and thus obtain good natural accuracy with better certified robustness that significantly improves upon prior benchmarks.\\n\\nThe main advantage of this framework is its flexibility in allowing arbitrary combinations of STOA deep networks with any networks with certified robustness and their selection mechanism is able to make good use of both.\\n\\n1. I like this simple idea and I am glad to see its good performance, although I wish the author can develop more theoretical results to quantify the value of a hybrid model.\\n\\n2. According to (2), the objective may not be differentiable because of the binary function $g$. Can you elaborate on how gradient-based algorithms are applied to this formulation?\\n\\n3. Can you provide some interpretation of the learned selection mechanism $g_{\\\\theta_s}$? In particular, what features of the samples make them be passed through the core or the certification network?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Novel idea with good results that would improved by better experimental evidence and clarity\", \"review\": \"# Summary of Contributions\\nThe paper presents an approach to trade off natural accuracy and certified robustness by combining a network with high natural accuracy (the \\u201ccore network\\u201d) with a second network with high certifiable robustness (the \\u201ccertification network\\u201d). A selection mechanism is used to decide which network an input sample should be processed by. The selection mechanism allows the combined system to perform significantly better than a weighted average of the core and certification networks (e.g. randomly assigning input to the core network with some probability $p$) would. \\n\\n# Score Recommendation\\nDespite the weaknesses in experimental evidence, clarity and reproducibility identified below, I recommend an acceptance because the authors have demonstrated that the selection mechanism presented works for non-trivial problems, providing a simple way to trade off natural accuracy and certified robustness. \\n\\nACE can consistently benefit from advances that improve the natural accuracy of the core network. In addition, as long as a selection mechanism can be found that is compatible with the certification network, it would be possible for ACE to leverage improvements in certified defenses.\\n\\nWhile results are only presented for $l_\\\\infty$ perturbations, I expect that the same approach can be applied to different perturbations, as long as it is possible for the selection mechanism to have a tunable selection rate while having non-trivial robustness to the perturbation of choice.\\n\\n# Weaknesses\\n## Missing Experimental Evidence\\n- The paper claims in the abstract that it is the first to obtain a high natural accuracy with non-trivial certified robustness; the results (91.6% natural accuracy, 22.8% certified robustness) are compared to the prior state-of-the art (77.4% natural accuracy, 16.5% certified robustness). However, I am concerned that the comparison may not be completely fair (if the network the comparison is made to is one of the CROWN-IBP trained DM-Large networks; see the section on \\u201cClarity\\u201d below). The paper itself acknowledges (in the first paragraph of page 7) that they did not conduct the extensive training required to obtain the performance reported in the CROWN-IBP paper.\\n - The authors should either train the CROWN-IBP DM-Large network with at least as much resources as in the original paper, or clearly identify in both the abstract and introduction that the amount of compute used was constrained.\\n\\n- The paper claims that \\u201cACE produces much more favorable robustness-accuracy trade-offs than varying hyperparameters of the existing certified defenses\\u201d on the basis of comparing DM-Large CROWN-IBP to the Conv2 COLT-based ACE SelectionNet. 
I believe that more evidence is required to substantiate this claim, since the choice of a base network is rather arbitrary (why choose one comparable to the second DM-Large network, not the first or the third?).\\n - One possible set of experiments is to use each of the 5 DM-Large networks with non-trivial certifiable accuracy as the certification network, and then showing that the resulting families of ACE SelectionNets have a better robustness-accuracy tradeoff.\\n\\n## Possible Experimental Errors\\nThe Conv3-COLT network in Figure 2 has a performance (~75% natural accuracy, ~50% certifiable accuracy) that is significantly worse than that reported in the original COLT paper (78.4% natural accuracy, 60.5% certified robustness). What is the cause of this significant gap?\\n\\n## Clarity\\n- The term \\u201cConvMedBig\\u201d is used in the caption for Figure 2 and elsewhere in the paper, but is not defined in the original paper. (It appears that the authors may be referencing [the name in code](https://github.com/eth-sri/colt/blob/20f30b073558ae80e5e726515998c1f31d48b6c6/code/networks.py#L79)). The authors should provide more detail about specifically which network this is. In fact, it appears from the third paragraph of Section 7 (and the code linked above) that \\u201cConvMedBig\\u201d matches the network Conv3 exactly. If this is the case, the same name should be used.\\n\\n- The authors compare to a prior approach with 77.4% accuracy and 16.5% certified robustness but do not specify what this approach is. (It appears from Figure 2 that this may be one of the DM-Large CROWN-IBP networks)\\n\\n- At the bottom of page 6, the paper states that \\u201cthe smaller networks to which COLT scales lack capacity to obtain the kind of robustness-accuracy trade-off that we target\\u201d. What does this mean? A significant proportion of the results in Table 1 are presented for COLT, so I\\u2019m confused by this statement.\\n\\n## Reproducibility\\n- Hyperparameters for the PGD attacks used are not provided, making it difficult to understand the strength of the adversarial attack being used. (If the adversarial attack is weak, the adversarial accuracy presented in Table 1 may be significantly higher than the actual robust model accuracy).\\n- More details should be provided about the algorithm used for certifying the networks in Table 1 (other than the Entropy-COLT-Conv2 network). The third paragraph of Section 5 states that \\u201cwe only use \\u2026 convex relaxation-based certification methods based on intervals and zonotopes\\u201d, but I couldn\\u2019t find any further details (for example, what zonotopes were used to verify the certification network?)\\n\\n# Questions for Authors\\n- I\\u2019d like to better understand how the adversarial accuracy of the network was evaluated; the paper only mentions that it is \\u201cusually computed using an adversarial attack such as PGD\\u201d (see the first paragraph of Page 3). One of my concerns is that the selection mechanism (particularly where a selection network is used) may reduce the success of PGD adversarial attacks without increasing the robustness of the network, possibly via gradient masking [1].\\n- In paragraph 2 of Section 5, the paper specifies that an adversarially trained network was used as the core network. 
Given that the last paragraph of Section 4 states that \\u201cwe assume that certification of the core-network always fails\\u201d, why did you choose an adversarially trained network (which presumably has slightly worse natural accuracy?)\\n\\n[1]: Papernot, Nicolas, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, and Ananthram Swami. \\\"Practical black-box attacks against machine learning.\\\" In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp. 506-519. 2017.\\n\\n\\n# Additional Feedback\\n- hyperparamters \\u2192 hyperparameters (4th line, third paragraph of section 5)\\n- Figure 2 presents many different networks but it is not clear what the point of the figure is. Compounding the issue is the fact that the corresponding discussion begins almost an entire page later. For improved clarity, the authors should consider adding more to the caption for Figure 2 or moving it closer to the discussion.\\n- The term \\u201cnatural accuracy\\u201d and \\u201cstandard accuracy\\u201d is used interchangeably; the paper should settle on one.\\n- Figure 7 labels the y-axis \\u201cstd of input zono errors\\u201d but this term is never introduced anywhere else in the text. (Is this the standard deviation, perhaps?)\\n\\n# Post-Rebuttal Comments\\nI've maintained my score at 6.\\n\\nDuring the comment period, the authors made progress in improving the clarity of their presentation. As with reviewer 3, I feel that there is still room for improvement; in particular, moving some experiments in Section 5 to the appendix could make for a more focused paper with a clearer message for the reader. (Unfortunately, we did not have enough time during the comment period to get there). \\n\\nI'd also note that the paper is now at nine pages; this means that I am holding it to a higher bar.\\n\\nOverall, however, I continue to recommend an acceptance as the method to trade off natural accuracy and certified robustness is simple and significantly improves on the state of the art; for me, these strengths outweight the remaining issues.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
9D_Ovq4Mgho | Network-Agnostic Knowledge Transfer for Medical Image Segmentation | [
"Shuhang Wang",
"Eugene Cheah",
"Elham Yousef Kalafi",
"Mercy Asiedu",
"Alex Benjamin",
"Vivek Kumar Singh",
"Ge Zhang",
"Viksit Kumar",
"Anthony Edward Samir"
] | Conventional transfer learning leverages weights of pre-trained networks, but mandates the need for similar neural architectures. Alternatively, knowledge distillation can transfer knowledge between heterogeneous networks but often requires access to the original training data or additional generative networks. Knowledge transfer between networks can be improved by being agnostic to the choice of network architecture and reducing the dependence on original training data. We propose a knowledge transfer approach from a teacher to a student network wherein we train the student on an independent transferal dataset, whose annotations are generated by the teacher. Experiments were conducted on five state-of-the-art networks for semantic segmentation and seven datasets across three imaging modalities. We studied knowledge transfer from a single teacher, combination of knowledge transfer and fine-tuning, and knowledge transfer from multiple teachers. The student model with a single teacher achieved similar performance as the teacher; and the student model with multiple teachers achieved better performance than the teachers. The salient features of our algorithm include: 1) no need for original training data or generative networks, 2) knowledge transfer between different architectures, 3) ease of implementation for downstream tasks by using the downstream task dataset as the transferal dataset, 4) knowledge transfer of an ensemble of models, trained independently, into one student model. Extensive experiments demonstrate that the proposed algorithm is effective for knowledge transfer and easily tunable. | [
"Knowledge Transfer",
"Deep Learning",
"Medical Image Segmentation",
"Pseudo Annotation"
] | Reject | https://openreview.net/pdf?id=9D_Ovq4Mgho | https://openreview.net/forum?id=9D_Ovq4Mgho | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"cD1-E7ae9bo",
"utX7IyFFyPm",
"KfIiRaQ47O",
"UM87V1rIBAW",
"UD987Px9p7X",
"E--4oo3D6V2",
"Lyihgg6RbzD",
"7FwoSYnPMWA",
"kyjYw8PPnAm",
"Lf6stxW5EUN",
"-6rkWfXRaw5"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040420579,
1606104118949,
1606102885486,
1606102851913,
1606102756925,
1606102453975,
1606091360781,
1606087828921,
1603993979649,
1603847297910,
1602759618251
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3748/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3748/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3748/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"A majority of the reviewers find the paper lacks novelty and provides an insufficient discussion of the state-of-the-art in knowledge distillation and student teacher training to warrant publication.\\nThe approach is quite narrow to the application domain and the paper does not provide novel insights on how to chose a good network.\\nA subset of the experiments performed on an internal data set with random train-test-splits do not evaluate a realistic transfer setting.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"~~~\\nReviewer's comment: The paper describes a knowledge transfer technique based on training a student network using annotation creating by a teacher network. This is actually not a summary of the method but the method itself. Most the the rest of the paper is devote do describe experiment details. \\n~~~\\nWe sincerely thank the reviewer for taking the time to review our paper.\\nWe have revised the paper extensively. Meanwhile, we also highlighted our major contribution as well as the novelty of this article. \\n\\nBy the way, the reviewer describes the main idea of the algorithm correctly. Please find our detailed explanation below. \\n\\n~~~\\nReviewer's comment: The idea is well known in machine learning community see e.g. Distilling the Knowledge in a Neural Network by Hinton. where is used to transfer knowledge from a huge network to a small network. Hence, there is not much novelty in the paper. \\n~~~\\n\\nWe surveyed the work on knowledge distillation (Hinton et al., 2015) and some related studies (Yoo et al., 2019; Lopes et al., 2017). The novelty of our algorithm can be seen at least from four aspects. \\n\\n- **The main novelty is that our algorithm does not require any teacher-training data, metadata, or additional generative network**. Previous studies trained student models heavily relying on the training dataset of the teacher model (Hinton et al., 2015) , metadata (Lopes et al., 2017), or additional generative networks (Yoo et al., 2019). As pointed out by Yoo et al. (2019) and Lopes et al. (2017), Hinton\\u2019s knowledge distillation (Hinton et al., 2015) needed to access the teacher training dataset (labeled or unlabeled). Lopes et al. (2017) required producing metadata during training and the student model had to be trained on metadata instead. Yoo et al. (2019) employed an additional generative network to generate artificial dataset for training the student model, so that the performance of the generative network also affected the training of the student network. The generative network was coupled with the teacher network, so that it is necessary to design and train a generative network for each teacher model, which will increase the computation and challenge. It would be even more challenging if there are multiple teacher models. Since the teacher-training dataset, metadata, and generative network was coupled with the teacher network, they limited the application of these algorithms or increased the computation burden to train the student network. \\n\\n- Our algorithm transfers knowledge between heterogeneous networks of semantic segmentation, while these well-known knowledge distillation studies focused on classification. \\n\\n- Our transferal dataset (used to train the student model) is allowed to be much different from the teacher training dataset, even of different image modalities. For example, we used PASCAL VOC2020 as transferal dataset to transfer the ultrasound segmentation knowledge, and it worked well. \\n\\n- Last but not least, our algorithm is really simple and easy to implement. 
\\n\\nWe also highlighted the salient features of our algorithm in **Abstract**-- \\u201cThe salient features of our algorithm include: 1) no need for original training data or generative networks, 2) knowledge transfer between different architectures, 3) ease of implementation for downstream tasks by using the downstream task dataset as the transferal dataset, 4) knowledge transfer of an ensemble of models, trained independently, into one student model.\\u201d \\n\\n~~~\\nReviewer's comment: Although the method is very simple it is difficult the follow the experimental results. It is written in a very unclear way. Do you use step 3 in the experiments? \\n~~~\\nTo make it easy to follow, we carefully revised the paper, especially the experiment section. For the convenience of the reviewer, we would like to make a brief summary of our study: \\n- Our main algorithm is on knowledge transfer from a teacher model to an student model that is independent in architecture (as the reviewer described in the first comment), which is Algorithm 1. \\n- If a small dataset with ground truth is provided, the knowledge transfer algorithm can be used together with fine-tuning (Algorithm 2). \\n- In the revision, we described the two algorithms with more details, separately . \\n\\nWe are looking forward to more comments and suggestions from the reviewer. We will try our best to improve the paper.\"}",
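For context, the two-step pipeline summarized in this response (Algorithm 1: the teacher pseudo-annotates a transferal set, then the student trains on the pseudo masks) can be sketched in a few lines of PyTorch. This is an illustrative reconstruction rather than the authors' code; the sigmoid/BCE choices are simplifying assumptions for a binary mask, and the paper's actual objective is an equally weighted CE + Dice loss (sketched separately below).

```python
import torch

@torch.no_grad()
def pseudo_annotate(teacher, transferal_loader, device="cpu"):
    """Step 1: a frozen teacher produces soft pseudo masks for an unlabeled
    transferal set; the teacher's own training data is never touched."""
    teacher.eval()
    pairs = []
    for images in transferal_loader:                 # unlabeled image batches
        masks = torch.sigmoid(teacher(images.to(device)))
        pairs.append((images.cpu(), masks.cpu()))
    return pairs

def train_student(student, pseudo_pairs, epochs=10, lr=1e-3, device="cpu"):
    """Step 2: a student of arbitrary architecture is trained from scratch on
    the pseudo-annotated pairs (BCE accepts soft targets in [0, 1])."""
    student.to(device).train()
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for images, masks in pseudo_pairs:
            opt.zero_grad()
            loss = loss_fn(student(images.to(device)), masks.to(device))
            loss.backward()
            opt.step()
    return student
```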
"{\"title\": \"Response to Reviewer 3 -- 4/4\", \"comment\": \"~~~\\nReviewer's comment: Data preprocessing: What preprocessing are you using for the training of the networks? How are you handling different shapes of the images? Are the segmentation algorithms trained on the full images or on patches? \\n~~~\\n\\nWe introduced data preprocessing in the appendix of the revision (**Section A.1**):\\n\\nIn addition to pre-processing, we employed five methods for data augmentation. \\n\\n- **Pre-processing:** All the images were resized to 384\\u00d7384 and the color images in Skin Lesion and PASCAL VOC2012 were converted into gray-scale images. \\n- **Random Cropping:** A percentage of consecutive rows and columns were cropped from the original image. The percentage was a random value in the range [70%, 100%] and [80%, 100%] for segmentation and classification, respectively. \\n- **Horizontal Flipping:** The image was reversed horizontally, that is, from left to right. \\n- **Random Rotation:** The image was rotated around its center. The rotation angle was a random value in the range [-45\\u25e6, 45\\u25e6] and [-30\\u25e6, 30\\u25e6] for segmentation and classification, respectively. \\n- **Gamma Adjustment:** Gamma correction was conducted on an image by the gamma value, which was randomly sampled in the range [0.7, 1.3]. \\n- **Contrast Adjustment:** The image contrast of an image was adjusted by the contrast factor, which was a random value in the range [0.7, 1.3].\\u201d \\n\\n~~~\\nReviewer's comment: Transferral datasets: Looking at the images in Fig. 2, it seems that even the same modality ultrasound images show different sorts of image artefacts - do you clean those at all? Do you think domain shift might be something that's interacting with your setup? \\n~~~\\n\\nWe did not remove or revise any images due to artifacts and we did not take domain shift into consideration. \\nWe are not sure if we understand this question correctly. Please let us know if you think our answer is confused. \\n\\n~~~\\nReviewer's comment: Do you have any theoretical or intuitive justification why you would want to perform knowledge transfer using unrelated data (skin lesion / different anatomy etc)? Why should this be better than using no data at all or regular computer vision datasets? Do you think the number of training examples for transfer matters? \\n~~~\\n\\nBy using unrelated data, we want to show that the transferal dataset is quite easy to build. \\nWe are not sure of what the reviewer mean by \\u201cusing no data\\u201d. We guess the reviewer refers to Yoo et al. (2019). Although Yoo et al. (2019) claimed they used no data, they needed an **additional** generative network to produce artificial training data. Actually, the generative network itself can be a challenge, as each teacher network needs to train the generative network specially and different tasks may need different generative neural architectures.\\n\\n~~~\\nReviewer's comment: What do example segmentations look like? Are there similar shapes for different datasets? Does the network also learn some sort of shape prior? (see Oktay, et al. Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation. 2017) \\n~~~\\n\\nNo. They do not have to be similar and they can be totally visually different. \\nIn the revision, we adopted a natural image dataset, PASCAL VOC2012, as the transferal dataset and it also worked well. 
\\n\\n~~~\\nReviewer's comment: Which loss did you use: CE / Dice-loss or a combination of the two? \\n~~~\\nWe used both CE and Dice loss with the same weight. Please refer to our revision in **Section A.3**--\\u201cTwo common loss functions were adopted with the same weight for the segmentation tasks\\u201d \\n\\nFinally, we greatly appreciate all the other detailed comments the reviewer provided. These comments are very helpful for us to polish the paper. All typos were corrected in the version.\"}",
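As a concrete reference for the loss statement above, an equally weighted CE + Dice objective for a binary mask might look as follows in PyTorch; the 0.5/0.5 normalization is an assumption, since "the same weight" fixes only the ratio, not the overall scale.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits, target, eps=1e-6):
    """Soft Dice loss for a binary segmentation mask."""
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def segmentation_loss(logits, target):
    """Cross-entropy (binary form here) and Dice, with the same weight."""
    ce = F.binary_cross_entropy_with_logits(logits, target)
    return 0.5 * ce + 0.5 * dice_loss(logits, target)
```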
"{\"title\": \"Response to Reviewer 3 -- 3/4\", \"comment\": \"~~~\\nReviewer's comment: The first example (Table 2) seems to be using an internal dataset that randomly has been split into train/val/test splits and therefore resembles no real transfer learning task and it seems unsuprising that the 'direct learning' approach yields the best results. The second example (Table 3) using the Baheya breast lesion dataset seems to tackle the problem of unsupervised domain adaptation rather than transfer learning: the teacher and target dataset both tackle breast lesion segmentation on ultrasound images. Here it could be that using the target dataset as transferral dataset might help to adjust batch statistics for potential normalisation layers to improve the performance. This leaves the last two examples as only real transfer learning experiments. \\n~~~\\n\\nOur main algorithm is on knowledge transfer (Algorithm 1). However, if a small dataset with ground truth of the downstream task is given, our knowledge transfer can be used together with fine-tuning (Algorithm 2). \\n\\nIn the revision, we removed experiments (direct learning) that might confuse the reviewer and lacked relevance. Table 2 and Table 3 aim to show that student models can transfer the knowledge of the teacher model by various transferal datasets without manual annotation. For example, Panoptic FPN was trained on pseudo-annotated PASCAL VOC2012* from scratch, and it could segment the XXX Breast Lesion-2 with an average Dice score of 81.28\\u00b118.97 (Table 2). Panoptic-FPN achieved a 94.72% Dice score of the teacher model. **Note that all student models were only trained on pseudo-annotated transferal datasets, and they had no access to the teacher-training dataset; the transferal dataset, such as PASCAL VOC2012, can be totally different from the test datasets (XXX Breast Lesion-2 and Baheya Breast Lesion).**\\n\\n It is true if the test dataset and the transferal dataset are of the same distribution, our algorithm can achieve domain adaptation without extra computation. This is actually one of the advantages of our algorithm, especially when the downstream task is known and the downstream task dataset is used as the transferal dataset. However, it doesn\\u2019t mean our algorithm is only data adaptation. As we can see from Table 3, if we used a dataset of different modality, PASCAL VOC2012*, as the transferal dataset, the student models continued to perform well. Table 4 and Table 5 show the results by combining knowledge transfer and fine-tuning. It is can be viewed as a kind of transfer learning, with the important point that the knowledge transfer occurs between heterogeneous networks without explicit access to large training datasets with ground truth labels. \\n\\n \\n\\n\\n\\n~~~\\nReviewer's comment: Lastly, the whole setup assumes that the input and output space of the student and teacher network are always the same, while it is argued that this approach allows for flexibility in difference of network architectures between the student and teacher network. However, semantic segmentation tasks in medical imaging often appear with various numbers of classes and 'input channels' requiring more advanced knowledge distillation -- this could be an interesting problem to tackle in a later version of this work. \\n~~~\\n\\nWe agree that it is one limitation of this work, that we only conducted experiments on networks that have the same input and output space. 
It would be interesting to further study the knowledge transfer between networks with different input data structure or different number of prediction classes. We have pointed it out in the discussion of the revision. Please refer to **Section 6** --\\u201cFurther study on knowledge transfer between neural networks with different input data structures or different number of prediction classes is also warranted.\\u201d\"}",
"{\"title\": \"Response to Reviewer 3 -- 2/4\", \"comment\": \"~~~\\nReviewer's comment: Further, the experiments do not contribute any new insights about how to choose the best student network nor which transferral dataset to use even though the introduction refers to unsuitability of certain pretraining tasks for a different target task. \\n~~~\\n\\n**Response about transferal datasets:**\\n- We revised the algorithm section with details about the image selection process (based on the pseudo mask). Please find our method for image selection in **Section 2**--\\u201cWe adopt two constraints to exclude images without salient targets, as shown in Figure 1. The first constraint is on the target pixel number in the pseudo mask, where target pixel indicates the pixel with a value above 0.5. We exclude the image (the resolution size is 384\\u00d7384) if the target pixel number is less than a threshold of 256. The second constraint is on the gray-level entropy of the pseudo mask. A higher value implies the target and the background have no obvious difference and vice versa. We exclude images with a gray-level entropy higher than a threshold of 2.5. We only employ the second constraint in the case that the teacher model outputs a soft mask rather than a binary mask.\\u201d \\n\\n- We conducted further experiments which demonstrated that most datasets, even those from different modalities, could work well for knowledge transfer. We discussed the selection of transferal datasets in **Section 6**--\\\"A transferal dataset with a large number of images with varied content has a higher possibility of capturing rich targets in the pseudo masks, but it may also include many images without salient targets. Future work includes optimizing a single transferal dataset or a combination of multiple transferal datasets to build a better aggregate.\\\"\\n\\n**Response about student models:**\\n- It is true that different models may have different abilities to learn from the pseudo-annotated transferal dataset, while it is similar to models learning on dataset with ground truth (e.g. manual annotations). Understanding the differences in student model learning on pseudo-annotated transferal dataset and manually annotated dataset can help generate confidence in the proposed algorithm. As a novel algorithm, there must be many avenues that can improve and extend the study. We leave the topic on how to build a student network to learn well on pseudo-annotated transferal dataset to our future study.\\n\\n- We condensed the discussion in **Section 6**--\\u201cDifferent models may have different abilities to learn from the pseudo-annotated transferal dataset. Understanding the differences in student model learning on pseudo-annotated transferal dataset and manually annotated dataset can help generate confidence in the proposed algorithm.\\u201d\"}",
"{\"title\": \"Response to Reviewer 3 -- 1/4\", \"comment\": \"We thank the reviewer their insightful comments. Following these suggestions, we have carefully and extensively improved the paper.\\n\\n~~~\\nReviewer's comment: Most importantly the paper seems to be lacking proper positioning within the space of knowledge distillation and student-teacher training which leads to an unclear message about the novelty of the paper. Student-teacher training for knowledge distillation is not novel and the message that it is possible to transfer knowledge using student-teacher training is unsurprising given that it has been shown before that you can transfer knowledge without any observed data (e.g. KegNet: Knowledge Extraction with No Observable Data. Yoo et al. NeurIPS 2019). \\n~~~\\nAs part of our revision, we reviewed all reviewers' recommended papers as well as some other related papers. Accordingly, we updated the introduction section to better position our algorithm within the space of knowledge distillation and student-teacher training. \\n\\nHere we introduce representative algorithms on knowledge distillation (Hinton et al., 2015; Yoo et al., 2019; Lopes et al., 2017). These algorithms needed to train student models relying on the training dataset of the teacher model (Hinton et al., 2015), metadata (Lopes et al., 2017), or additional generative networks (Yoo et al., 2019). As pointed out by Yoo et al. (2019) and Lopes et al. (2017), Hinton\\u2019s knowledge distillation (Hinton et al., 2015) needs to access the teacher-training dataset (labeled or unlabeled). Lopes et al. (2017) required producing metadata during training and the student model had to be trained based on this metadata. Yoo et al. (2019) employed an additional generative network to generate artificial dataset for training the student model. Therefore, it was necessary to design and train a generative network for each teacher model, which increased the computation burden and finally determined the performance of the student network. This process would be even more challenging if there were multiple teacher models. **Since the teacher training dataset, metadata, and generative network were coupled with the teacher network, they limited the application of these algorithms. Comparatively, our algorithm is simple but effective, does not rely on additional generative networks and does not have any requirements on the teacher-training data or metadata.**\\n\\nWe showed that the transferal dataset was allowed to be different from the teacher training-dataset. For example, we used PASCAL VOC2020 as transferal dataset to transfer the ultrasound segmentation knowledge, and it worked well. Even though our algorithm is straightforward, it was able to solve a particularly challenging problem (even to complex algorithms such as (Yoo et al., 2019)). \\n\\nOur revised paper presents the explanation in **Section 1**-- \\u201cKnowledge distillation is the process of transferring the knowledge of a large neural network or an ensemble of neural networks (teacher) to a smaller network (student) (Hinton et al., 2015). Given a set of trained teacher models, one feeds training data to them and uses their predictions instead of the true labels to train the student model. For effective transfer of knowledge, however, it is essential that a reasonable fraction of the training examples is observable by the student (Li et al., 2018) or the metadata at each layer is provided (Lopes et al., 2017). Yoo et al. 
(2019) used a generative network to extract the knowledge of a teacher network, which generated labeled artificial images to train another network. As can be seen, Yoo et al.\\u2019s method had to train an additional generative network for each teacher network.\\u201d \\n\\n**Reference**\\n\\n*Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.*\\n\\n*Raphael Gontijo Lopes, Stefano Fenu, and Thad Starner. Data-free knowledge distillation for deep neural networks. arXiv preprint arXiv:1710.07535, 2017.*\\n\\n*Jaemin Yoo, Minyong Cho, Taebum Kim, and U Kang. Knowledge extraction with no observable data. In Advances in Neural Information Processing Systems, pp. 2705\\u20132714, 2019.*\"}",
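The ensemble case quoted from Section 1 (feeding data to several independently trained teachers and using their predictions instead of true labels) reduces to a one-line fusion step for segmentation. A hedged PyTorch sketch, assuming binary-segmentation teachers that output logits:

```python
import torch

@torch.no_grad()
def ensemble_pseudo_mask(teachers, images):
    """Average the soft masks of independently trained teachers; the averaged
    mask becomes the pseudo annotation used to train a single student. The
    teacher architectures may all differ."""
    probs = [torch.sigmoid(t(images)) for t in teachers]
    return torch.stack(probs).mean(dim=0)
```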
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We really appreciate the encouraging comments. To better present the study, we have revised and polished the paper extensively.\\n\\n~~~\\nReviewer's comment: The authors have used five segmentation networks, it is suggested that the selection of these five algorithms is further justified. \\n~~~\\n\\nPlease find our explanation in **Section 3** -- \\u201cWe posit that if the teacher model is ineffective, a more capable student model can easily achieve similar performance; if the student model, however, is ineffective, the cause is not easily identified. It can be attributed to the knowledge transfer algorithm or inherent limitations of the student model itself. We experimented with five state-of-the-art deep neural networks, which have different architectures and have been shown to be the best on various segmentation tasks.\\u201d \\n\\nAlso, we would like to briefly introduce each method: \\n- DeepLabv3+ is one of Google\\u2019s latest and best performing semantic segmentation models. \\n- U-Net has been the most widely used network for medical image segmentation. \\n- AttU-Net is a modification of U-Net, where attention gates help the model focus on the target. \\n- SDU-Net has demonstrated better performance by using only 40 percent of the parameters in U-Net. \\n- Panoptic-FPN merges semantic segmentation and object detection, which provides a more rich and complete segmentation.\\n\\n~~~\\nReviewer's comment: One of the major concern in medical imaging domain is the black-box nature of the DL algorithms used, authors should comment on how relying on this black-box nature for knowledge transfer would effect the interpretability of these results. \\n~~~\\n\\nThank you for highlighting this important point. \\n\\nThe main difference between our knowledge transfer algorithm and conventional deep learning training is that our transferal dataset is pseudo-annotated where the mask has no physical meaning. Interpretable machine learning techniques can be grouped into two categories: local interpretability and global interpretability. Local interpretability examines an individual prediction of a model locally, trying to figure out why the model makes the decision it makes. Global interpretability implies that the user can understand how the model works globally by inspecting the structures and parameters of a complex model. Since local interpretability does not need the original training dataset, our knowledge transfer does not complicate the interpretability. However, it may complicate the global interpretability, as the parameters of the neural network are closely related to the training dataset. It would be interesting to study how interpretability is affected by knowledge transfer. \\n\\nDue to page limitation, we condensed the discussion in **Section 6** -- \\u201cMedical imaging applications requires interpretable algorithms which generate confidence in the clinical user. Knowledge transfer employs pseudo annotation for training which has no physical meaning. It is imperative to examine and quantify the interpretability of student model before deploying models clinically.\\u201d \\n\\n~~~\\nReviewer's comment: The method relies on three datasets and three models to come up with the final target segmentation, what are the requirements on the size of these datasets, in general the authors should discuss the effect of this on the overall performance. 
\\n~~~\\n\\nOut the three datasets, only the pseudo-annotated transferal dataset is novel in this study. So, our discussion and revision mainly focuses on the transferal dataset. Please find our explanation below:\\n\\n - **Teacher-training dataset:** the teacher-training dataset is used to train the teacher model, which is the conventional training with ground truth. Our algorithm aims to solve the issue of knowledge transfer when the teacher-training dataset is not accessible; as such, the size of teacher-training dataset is not the primary focus of this study. \\n\\n - **Transferal dataset:** As part of our revision, we detailed our method to refine the transferal dataset by excluding images without salient targets in the pseudo mask (in **Section 2**). Our experiments found that smaller datasets, wherein images without salient targets in the pseudo annotation are excluded, could result in better performance. Table 2 and Table 3 demonstrate that all student models trained on pseudo-annotated Baheya Breast Lesion resulted in Dice scores similar to the teacher model, although the size of Baheya Breast Lesion is much smaller than the teacher-training dataset. We condensed the discussion in **Section 6** --\\u201cTransferal dataset with a large number of images of various contents has higher possibility to capture rich targets in the pseudo masks, but it may also include many images without salient targets. Future work includes optimization of a single transferal dataset or combine multiple transferal datasets to build a better one\\\". \\n\\n - **Fine-tuning dataset:** Similarly, the fine-tuning dataset is similar to that of conventional fine-tuning.\"}",
"{\"title\": \"Summary of Revision\", \"comment\": \"We sincerely thank the reviewers for their valuable feedback and insight. We have carefully and extensively revised and polished the paper. We hope that our response will fulfill the desired alterations.\\n\\nThe major revisions are summarized below. \\n\\n**Introduction:** We introduced knowledge distillation algorithms to contrast our method within the space of knowledge distillation and student-teacher training. \\n\\n**Algorithm:** We reorganized the algorithm section. The main stay of our algorithm is knowledge transfer (Algorithm 1). However, if a downstream task is accompanied with a small dataset with ground truth, our knowledge transfer can be used together with fine-tuning (Algorithm 2 in Appendix). We make clear distinctions between the two algorithms and describe them in more details. \\n\\n**Experiment:** We excluded the experiment on \\\"direct learning\\\" which lacked relevance for knowledge transfer. Meanwhile, we added 1) the experiment on knowledge transfer that only uses images with salient targets in the pseudo mask, and 2) the experiment on knowledge transfer from multiple weak teachers. We also defined and calculated the \\\"knowledge transfer capability\\\", which measures the capability of a transferal dataset to transfer the knowledge from a teacher model to a student model.\\n\\n**Discussion:** We added the discussion section as suggested by the reviewers and provided our insights into the study. \\n\\n**Key terms:** We changed some key terms for ease of understanding. \\n\\n- \\\"dataset agent\\\" --> \\\"transferal dataset\\\" \\n- \\\"educated student\\\" --> \\\"trained student\\\" \\n- \\\"pseudo annotation\\\" --> \\\"pseudo mask\\\"\"}",
"{\"title\": \"The authors present a network agnostic framework for student teacher training paradigm. The experiments and results are presented for medical imaging datasets, where annotations are hard to achieve.\", \"review\": \"In this work the authors propose to transfer knowledge between teacher and student networks trained on separate datasets, and claim to overcome challenges in availability of data annotations for challenging semantic segmentation in medical imaging domain.\\n\\nStrengths\\nThe proposed model is simple to follow and is targeted towards a significant problem in medical imaging analysis domain. \\n\\nComments\\nThe authors have used five segmentation networks, it is suggested that the selection of these five algorithms is further justified. \\nOne of the major concern in medical imaging domain is the black-box nature of the DL algorithms used, authors should comment on how relying on this black-box nature for knowledge transfer would effect the interpretability of these results. \\nThe method relies on three datasets and three models to come up with the final target segmentation, what are the requirements on the size of these datasets, in general the authors should discuss the effect of this on the overall performance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting direction but lacks positioning in related work and clearer experimentation\", \"review\": \"The paper proposes to use student-teacher training as a way of knowledge transfer between neural networks with different architectures without access to the source data. Instead the authors propose to use a separate dataset to transfer the knowledge of the teacher network and a potential different dataset for fine-tuning. The paper evaluates their method with various segmentation architectures by pretraining a DeepLab v3+ on an internal breast lesion dataset and testing transfer and fine-tuning using different medical datasets. The authors find that knowledge transfer performs similar to regular transfer learning in most combinations of datasets.\\n\\nI believe that the paper tackles an interesting scenario of transferring knowledge from a fixed pre-trained network to a potentially different application without access to the original training data and sets up an extensive set of combinations of target tasks, student network architectures and transferral dataset (called dataset agent in the paper).\\n\\nHowever, I cannot recommend the paper for acceptance in its current form. Most importantly the paper seems to be lacking proper positioning within the space of knowledge distillation and student-teacher training which leads to an unclear message about the novelty of the paper. Student-teacher training for knowledge distillation is not novel and the message that it is possible to transfer knowledge using student-teacher training is unsurprising given that it has been shown before that you can transfer knowledge without any observed data (e.g. KegNet: Knowledge Extraction with No Observable Data. Yoo et al. NeurIPS 2019).\\nFurther, the experiments do not contribute any new insights about how to chose the best student network nor which transferral dataset to use even though the introduction refers to unsuitability of certain pretraining tasks for a different target task. The first example (Table 2) seems to be using an internal dataset that randomly has been split into train/val/test splits and therefore resembles no real transfer learning task and it seems unsuprising that the 'direct learning' approach yields the best results. The second example (Table 3) using the Baheya breast lesion dataset seems to tackle the problem of unsupervised domain adaptation rather than transfer learning: the teacher and target dataset both tackle breast lesion segmentation on ultrasound images. Here it could be that using the target dataset as transferral dataset might help to adjust batch statistics for potential normalisation layers to improve the performance. This leaves the last two examples as only real transfer learning experiments. Lastly, the whole setup assumes that the input and output space of the student and teacher network are always the same, while it is argued that this approach allows for flexibility in difference of network architectures between the student and teacher network. However, semantic segmentation tasks in medical imaging often appear with various numbers of classes and 'input channels' requiring more advanced knowledge distillation -- this could be an interesting problem to tackle in a later version of this work.\", \"further_comments\": [\"Data preprocessing: What preprocessing are you using for the training of the networks? How are you handling different shapes of the images? 
Are the segmentation algorithms trained on the full images or on patches?\", \"Transferral datasets: Looking at the images in Fig. 2, it seems that even the same modality ultrasound images show different sorts of image artefacts - do you clean those at all? Do you think domain shift might be something that's interacting with your setup?\", \"Do you have any theoretical or intuitive justification why you would want to perform knowledge transfer using unrelated data (skin lesion / different anatomy etc)? Why should this be better than using no data at all or regular computer vision datasets? Do you think the number of training examples for transfer matters?\", \"What do example segmentations look like? Are there similar shapes for different datasets? Does the network also learn some sort of shape prior? (see Oktay, et al. Anatomically constrained neural networks (ACNNs): application to cardiac image enhancement and segmentation. 2017)\", \"Which loss did you use: CE / Dice-loss or a combination of the two?\", \"The term dataset agent is already used in the abstract and is not very clear - I'd personally find something like 'transfer[ral] dataset' easier to grasp.\", \"Introduction, first paragraph: 'black-[space]box' -> 'black-box'\", \"Introduction: 'network(teacher)' -> 'network (teacher)'\", \"What's a latent dataset? I would rather simply refer to 'learned representations' or 'knowledge'\", \"'XXX' is already used in the introduction and not explained - I would simply refer to 'internal / in-house datasets'. Also, note the comment on the breast lesion dataset only being a single dataset with different splits.\", \"I have not seen the term 'educated' in reference to neural networks before - it would be more common to say 'trained'.\", \"Section 5.3.2): You mention that the networks trained from scratch have poor performance because of the small tuning dataset - I guess you are referring to the small training set?\", \"Another potential reference for knowledge transfer for medical imaging could be Kuzina, et al. Bayesian Generative Models for Knowledge Transfer in MRI Semantic Segmentation Problems. 2019\", \"As this work is relatively application-specific it might be better suited for one of the more medically inclined venues like MIDL, MICCAI, ...\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The proposed idea is not novel\", \"review\": \"The paper describes a knowledge transfer technique based on training a student network using annotation creating by a teacher network. This is actually not a summary of the method but the method itself. Most the the rest of the paper is devote do describe experiment details.\\n\\nThe idea is well known in machine learning community see e.g. Distilling the Knowledge in a Neural Network by Hinton. where is used to transfer knowledge from a huge network to a small network. Hence, there is not much novelty in the paper.\\n\\nAlthough the method is very simple it is difficult the follow the experimental results. It is written in a very unclear way.\\nDo you use step 3 in the experiments? \\n\\nwhat is your conclusion regarding parameter fine tuning vs. your approach?\\n\\nOver all the paper is more suitable for a medical imaging conference than fro a general deep learning conference.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
8VXvj1QNRl1 | On the Transfer of Disentangled Representations in Realistic Settings | [
"Andrea Dittadi",
"Frederik Träuble",
"Francesco Locatello",
"Manuel Wuthrich",
"Vaibhav Agrawal",
"Ole Winther",
"Stefan Bauer",
"Bernhard Schölkopf"
] | Learning meaningful representations that disentangle the underlying structure of the data generating process is considered to be of key importance in machine learning. While disentangled representations were found to be useful for diverse tasks such as abstract reasoning and fair classification, their scalability and real-world impact remain questionable.
We introduce a new high-resolution dataset with 1M simulated images and over 1,800 annotated real-world images of the same setup. In contrast to previous work, this new dataset exhibits correlations, a complex underlying structure, and allows to evaluate transfer to unseen simulated and real-world settings where the encoder i) remains in distribution or ii) is out of distribution.
We propose new architectures in order to scale disentangled representation learning to realistic high-resolution settings and conduct a large-scale empirical study of disentangled representations on this dataset. We observe that disentanglement is a good predictor for out-of-distribution (OOD) task performance. | [
"representation learning",
"disentanglement",
"real-world"
] | Accept (Poster) | https://openreview.net/pdf?id=8VXvj1QNRl1 | https://openreview.net/forum?id=8VXvj1QNRl1 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PzOR8nZb9jJ",
"EBI-WThX2ri",
"ksDsS89iYk9",
"72WCnWWbzH1",
"lgy2qms_w0f",
"CxilaU-B9I0",
"DFD-INSEhnN",
"ci2ulrcgR-O",
"RdIANJDfnSG"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040490084,
1605650418662,
1605650086799,
1605649939339,
1605649762995,
1603816652367,
1603800599975,
1603800129977,
1603726472491
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3746/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3746/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3746/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3746/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3746/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3746/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3746/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3746/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper introduces a new dataset for evaluating disentanglement and its impact on out of distribution generalization based on the trifinger robotics platform. Using this dataset, the authors rigorously investigate the performance of beta-VAEs in this setting under a number of conditions, finding that weak supervision is necessary to induce disentangled representations, and that, perhaps surprisingly, disentanglement does not help for sim2real settings despite the similarity between the simulator and the real data. Reviewers were divided on the work, but had a number of concerns related to the claims of novel architecture, comparisons to baselines, and issues with the clarity of the paper, some of which were addressed in the authors' response. I agree with some of these concerns, particularly with respect to the claims of novel architectures since the modifications could simply be viewed as tweaking hyperparameters and are not rigorously compared to baselines. However, I think the novelty of the dataset and the rigorous evaluation of OOD generalization settings is likely to be valuable enough to the community to merit acceptance. I'd encourage the authors, however, to tone down some of the claims regarding the architecture (or provide sufficient baseline comparisons), and instead focus on the dataset and the OOD results. I recommend acceptance.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for taking the time and effort to review our paper. It is encouraging to see the reviewer considers our work convincing and important, and in particular the dataset and the experimental setup to be potentially extremely useful to the community going forward.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are grateful to the reviewer for the very extensive and constructive feedback, which already helped us improve the clarity of the paper. We are pleased to read that the reviewer considers our work \\u201ca significant contribution in experimental analysis on the performance of disentanglement with the inclusion of realistic complexities\\u201d. We revised the manuscript, especially on the OOD evaluation setup in Section 4 and Appendix A, to address issues pointed out by the reviewer in cons 3, 5, 6, 7, 8, 9, 10, and question 2. We address the remaining reviewer\\u2019s concerns point-by-point below.\", \"cons\": [\"1\\\\) We chose this approach for consistency with previous work [1,2,3]. However, we agree that it would be interesting to evaluate disentanglement metrics on held-out examples.\", \"2\\\\) Comparing the effectiveness of different disentanglement methods in our dataset is indeed interesting and relevant for the community. However, since evaluating the relationship between disentanglement and OOD generalization already required significant compute, we chose to limit the study in terms of disentanglement methods.\", \"3a\\\\) Many of the weakly supervised models are highly disentangled, while none of the unsupervised models are. This can be seen from: (1) the main text, where we report how many weakly supervised models have >99% DCI score, (2) the violin plots in Fig. 3 left, and (3) the scatter plots in Appendix B.4 in the new version of the paper.\", \"3b\\\\) Regarding latent traversals: In our experience, disentanglement metrics do not always behave as expected [4]. Model visualizations such as latent space traversals remain the gold standard for assessing disentanglement.\", \"4\\\\) In our study we observed from visual inspection (see point 3b) that SAP and Modularity give similar scores to entangled and disentangled models, which is not desirable, while DCI and MIG more reliably differentiate between them. Limitations of disentanglement metrics are also discussed in [4].\", \"5\\\\) No, we evaluate the same representations on disentanglement and OOD generalization. Thanks for pointing this out - we removed that sentence as it was misleading.\", \"6\\\\) D1 was selected from D by sampling images where the cube color is in the desired set. Regarding in-distribution generalization: we only report it in terms of the GBT10000 metric, for consistency with previous work. Both GBT10000 and the downstream predictors are trained on 10k samples and tested on 5k as in [1,2,3].\", \"7-10\\\\) We clarified the setup of the OOD evaluation in the revised manuscript. About 10): note that the test sets are OOD in terms of cube color, which is also taken into account when evaluating disentanglement. So it is not true that the data is not out-of-distribution in terms of the factors the model was evaluated on disentanglement with respect to.\", \"11\\\\) The justification for Gaussian noise is already given in the experimental setup paragraph in Section 3.\"], \"questions\": [\"1\\\\) We used the bullet physics engine to render images of this scene given specific properties, i.e. the factors of variation. We assumed that the factor names were self-explanatory, but we are happy to include more detailed information if deemed helpful. Note that the separate factors are also visible in the latent traversals from the models with high disentanglement.\", \"2\\\\) The 8 color hues are the cube color hues. 
We realize this might be unclear and fixed it in the revised version.\", \"3\\\\) We performed basic ablation studies to find an architecture that would allow learning fully disentangled representations. These are only briefly mentioned in the appendix, without numerical results.\", \"4\\\\) We sweeped only through latent dimensions bigger than the number of ground truth factors of variation, as is typically done in other disentanglement studies [1,2,3].\", \"5\\\\) It requires access to images with arbitrary combinations of factors of variation, which is not possible in our dataset, as it is not generated on-the-fly.\", \"6\\\\) This is a very interesting idea which we left for future work. Note that the correlation we observed between ELBO (or reconstruction loss) and disentanglement is very strong - in fact, it is stronger than the one reported in the UDR paper for the unsupervised setting.\", \"7\\\\) Any measure of mean error would work, as we are mainly interested in the relative performance of different models. We chose the MAE for its relatively intuitive interpretation.\", \"[1] Locatello et al. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" ICML 2019.\", \"[2] Locatello et al. \\\"On the fairness of disentangled representations.\\\" NeurIPS 2019.\", \"[3] van Steenkiste et al. \\\"Are Disentangled Representations Helpful for Abstract Visual Reasoning?\\\" NeurIPS 2019.\", \"[4] Locatello et al. \\u201cA Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation.\\u201d JMLR, 2020.\"]}",
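Where the response above mentions downstream predictors trained on 10k latent codes and a normalized MAE averaged over factors, the following sketch illustrates the kind of evaluation meant. The use of scikit-learn's GradientBoostingRegressor and all variable names here are illustrative assumptions, not the paper's exact code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def downstream_mae(z_train, y_train, z_test, y_test):
    """Train one predictor per ground-truth factor on latent codes and
    report the MAE per factor. Factors are assumed normalized to [0, 1],
    so the per-factor MAEs can be averaged directly."""
    maes = []
    for k in range(y_train.shape[1]):
        model = GradientBoostingRegressor().fit(z_train, y_train[:, k])
        pred = model.predict(z_test)
        maes.append(np.abs(pred - y_test[:, k]).mean())
    return float(np.mean(maes)), maes
```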
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for the feedback. We believe most of this reviewer\\u2019s concerns to be due to misunderstandings of the primary objectives and contributions of this work, which we hope to clarify below.\\n\\n1\\\\) On the weakly supervised setting:\\n\\n1a\\\\) \\u201cThe authors proposed a unique learning scheme [...]\\u201d: In fact, we only use existing SOTA methods for disentangled representation learning: we train beta-VAEs (1) without supervision by optimizing the ELBO, and (2) with weak supervision using Ada-GVAE [1]. The contributions of this work do not include proposing a new method for disentangled representation learning.\\n\\n1b\\\\) \\u201cthe authors expected such a training scheme [...]\\u201d: This training scheme had already been shown to be more effective than unsupervised learning [1]. Our results confirm this.\\n\\n1c\\\\) \\u201cMoreover, assuming [...] would not be practical\\u201d: We chose k=1 because this has been shown to lead to higher disentanglement than k>1 [1]. We stress that we are not interested in the specific learning method, but rather in evaluating the role of disentanglement. The question of whether Ada-GVAE with k=1 is generally practical is certainly relevant, but orthogonal to this paper (see [1] for a discussion).\\n\\n2\\\\) Comparison to baseline or SOTA representation disentanglement methods.\\nThe focus of our study is on the effect of disentanglement on OOD generalization - not on comparing different disentangled representation learning methods. We employ SOTA disentanglement learning methods (from disentanglement_lib [2]) to learn representations with a wide range of disentanglement, which in turns allows a sound empirical study on downstream OOD generalization.\\n\\n3\\\\) Quantitative metrics selected by the authors not sufficiently informative or supportive.\\nThe selected disentanglement metrics are widely used in the context of autoencoder-based representation learning methods [2-9]. Nevertheless, we gladly welcome any suggestions for more informative disentanglement metrics.\\n\\n4\\\\) \\u201cSince VAE are simply trained in a unsupervised way [...] I see no evidence why the resulting features would be any different from those derived from standard VAEs, and why improved disentanglement results could be achieved.\\u201d\\nWe train VAEs either with the standard unsupervised approach or with the Ada-GVAE weakly supervised method. We want to stress that Ada-GVAE is a fundamentally different training algorithm that relies on weak labels, so it is not true that all our VAEs are trained in an unsupervised way. Moreover, we do not claim the weakly supervised representations should necessarily be more disentangled - we simply observe empirically that they tend to be, which is also in agreement with results in [1].\\n\\n[1] Locatello et al. \\u201cWeakly-supervised disentanglement without compromises.\\u201d ICML 2020.\\n\\n[2] Locatello et al. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" ICML 2019.\\n\\n[3] Chen et al. \\u201cIsolating sources of disentanglement in variational autoencoders.\\u201d NeurIPS 2018.\\n\\n[4] Ridgeway and Mozer \\u201cLearning deep disentangled embeddings with the f-statistic loss.\\u201d NeurIPS 2018.\\n\\n[5] Eastwood and Williams \\u201cA framework for the quantitative evaluation of disentangled representations.\\u201d ICLR 2018.\\n\\n[6] Kumar et al. 
\\u201cVariational inference of disentangled latent concepts from unlabeled observations.\\u201d ICLR 2018.\\n\\n[7] Locatello et al. \\\"On the fairness of disentangled representations.\\\" NeurIPS 2019.\\n\\n[8] Locatello et al. \\u201cA Sober Look at the Unsupervised Learning of Disentangled Representations and their Evaluation.\\u201d JMLR, 2020.\\n\\n[9] van Steenkiste et al. \\\"Are Disentangled Representations Helpful for Abstract Visual Reasoning?\\\" NeurIPS 2019.\"}",
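For readers unfamiliar with the Ada-GVAE procedure discussed above, the following schematic sketch shows the core adaptive-aggregation step on a pair of Gaussian posteriors. The half-way KL threshold follows Locatello et al. (2020) [1]; the simple parameter averaging and the KL direction here are simplifying assumptions of this sketch, not the paper's exact formulation.

```python
import torch

def kl_per_dim(mu1, lv1, mu2, lv2):
    # KL( N(mu1, exp(lv1)) || N(mu2, exp(lv2)) ) computed per latent dimension
    v1, v2 = lv1.exp(), lv2.exp()
    return 0.5 * (v1 / v2 + (mu2 - mu1) ** 2 / v2 - 1 + lv2 - lv1)

def adaptive_average(mu1, lv1, mu2, lv2):
    """Ada-GVAE-style aggregation for a pair with one unknown changed factor.

    Dimensions whose per-dim KL falls below the half-way threshold are
    treated as shared and replaced by an averaged Gaussian; the remaining
    dimensions are left untouched.
    """
    delta = kl_per_dim(mu1, lv1, mu2, lv2)                 # [batch, latent_dim]
    tau = 0.5 * (delta.max(dim=1, keepdim=True).values
                 + delta.min(dim=1, keepdim=True).values)
    shared = delta < tau
    mu_avg = 0.5 * (mu1 + mu2)
    lv_avg = (0.5 * (lv1.exp() + lv2.exp())).log()         # assumed averaging rule
    out1 = (torch.where(shared, mu_avg, mu1), torch.where(shared, lv_avg, lv1))
    out2 = (torch.where(shared, mu_avg, mu2), torch.where(shared, lv_avg, lv2))
    return out1, out2
```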
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for the helpful feedback and appreciate the comments that we offer a solution to weaknesses of previous datasets and carry out thorough experiments. We have updated the paper and we hope some of the issues might be resolved. We address the reviewer\\u2019s concerns individually below.\\n\\n1a\\\\) On the context, scope and contributions of this work.\\nWe would like to emphasize that the primary goal of our work was to empirically investigate transfer capabilities under different OOD scenarios of disentangled representations using popular disentanglement learners beyond synthetic toy datasets. We called our setting \\u201crealistic\\u201d because (1) it exhibits challenges of a real-world robotic setup, with real images as well as simulated images generated with the bullet physics engine renderer, (2) we study transfer on the real setup (using the accompanied annotated images) and (3) it opens up the possibility for evaluations on downstream-tasks such as RL that can be deployed and tested on the real robot. Nevertheless, we would greatly appreciate any suggestions on how to improve the title.\\n\\n1b\\\\) On the assumption that disentanglement helps for transfer.\\nAlthough one might expect that disentanglement helps for OOD generalization, this is actually not obvious. Therefore, we believe there is value in exploring this empirically, especially since we are not aware of literature that investigates this in a large-scale study. Interestingly, our results indicate for example that disentanglement is not necessarily useful in the OOD2 setting, which provides further proof that these relationships should be thoroughly investigated.\\n\\n1c\\\\) On novel methods to improve OOD2 generalization. \\nWe believe our results offer insights that may help progress towards achieving broader out-of-distribution generalization, which is a long-standing challenge in machine learning. Possible research avenues for improving generalization in the OOD2 scenario may include stronger inductive biases and different imposed structures on the latent space. However, we leave exploring these approaches to future work.\\n\\n2\\\\) On sim-to-real results with Gaussian noise.\\nIn our study we observed that adding noise seems to be more beneficial for generalization than disentanglement, and we believe this to be an important insight. However, we do not claim that this would be sufficient for effective sim-to-real transfer, which is in fact not the main focus of our work.\\n\\n3\\\\) The dataset is not really challenging.\\nWe want to stress that in the context of disentanglement studies this dataset is certainly more challenging and complex than previous ones. Even though it might not seem so hard for neural networks, previous work on disentanglement only focused on very simple models (encoder and decoder have 4 conv layers and 2 FC). Moreover, previous architectures are often just not enough for this type of data (see for example reconstructions on simpler datasets like SmallNORB in [1]).\\nWe agree CLEVR is a very interesting dataset, but as the reviewer points out it\\u2019s not straightforward how disentanglement should be interpreted in this context. About occlusions, note that our dataset also contains heavy occlusions, unlike previous disentanglement datasets. Moreover, our dataset is derived from a robotic platform so one can directly move towards RL tasks in future work, unlike with CLEVR.\\n\\n[1] Locatello et al. 
\\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" ICML 2019.\"}",
"{\"title\": \"Official review\", \"review\": \"Summary:\\nThis paper identifies that traditional datasets used for learning disentangled representation have several shortcomings such as no correlation between variables and simple structure.\\nIt proposes a new dataset that has 1M higher-resolution simulated images along with 1K annotated real-world images of the same setup and gives analysis on disentangled representations on the dataset. \\nIts results suggest that disentangled representations can result in better out of distribution task performances. \\n\\n==================================\", \"strength\": [\"Identified weaknesses of the previous datasets and proposed a new dataset that exhibits correlations between different variables. This is an important aspect of real-world scenarios.\", \"Provided thorough experiments on disentangled representations and their metrics on the proposed dataset.\"], \"weakeness\": [\"Experimental results for showing more disentangled representation results in better OOD task performances is somewhat expected. Unlike the title, it is not clear if the paper shows sufficient transfer of disentangled representations in realistic settings. It would be great to propose a way to make the generalization better for the settings the models have trouble with (OOD2 generalization).\", \"This paper tried one approach by adding noise during training, but it leads to my second concern that the real world observation is very similar to the simulated data. With some gaussian noise, they would look similar. Therefore, it might not be sufficient to show that we can use this approach for sim-to-real transfer.\", \"The paper claims that the proposed dataset (which is interesting) is challenging and highly complex, but the rendered images look easy enough for a simple neural network model to learn to reconstruct. Datasets like CLEVR, although they might not be used for measuring disentanglement, seems more complex and exhibits occlusions.\", \"==================================\", \"While the proposed dataset is interesting, I am not confident this paper is showing evidence for the usefulness of disentangled representations for transfer learning in realistic settings. It is also not clear if the dataset is more useful than others because of the reasons stated above.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not technically sound and lack of sufficient evaluation\", \"review\": \"The authors proposed a unique learning scheme for representation disentanglement. However, unlike infoGAN or ACGAN which explicitly learn disjoint feature representations for describing the attributes of interest (via unsupervised and supervised settings, respectively), the authors chose to address this task in a questionable \\\"weakly supervised setting\\\". More specifically, the authors chose to train AE-like model using pairwise images, which the difference between each pair of the inputs is only associated with one attribute of interest (e.g., angle, position, etc.).\\n\\nFor some reasons, the authors expected such a training scheme would result in learning feature representation in which only \\\"one\\\" feature dimension would reflect such attribute differences. This is a very strong assumption, since it is very likely that more than one feature dimensions would correspond to such changes. \\n\\nMoreover, assuming that precisely one feature dimension would be associated with the attribute of interest by feeding in a pair of images with exactly this attribute change would not be practical either. Most real-world images would be complex and contain multiple attributes. Making this assumption would imply that the training images are not realistic. \\n\\nAs for the evaluation, there is no comparison to any baseline or SOTA representation disentanglement methods, I found the quantitative metrics selected by the authors not sufficiently informative or supportive either. Most importantly, the authors claimed that the features trained by VAE allowed improved performances (e.g., Figs 3~5). Since VAE are simply trained in a unsupervised way (even the authors called their setting a weakly supervised setting), I see no evidence why the resulting features would be any different from those derived from standard VAEs, and why improved disentanglement results could be achieved.\\n\\nBased on the above observations and remarks, I feel that the authors would not be able to deliver a work which is technically strong with sufficiently complete evaluation. Therefore, I do not think this paper is above the ICLR standard for acceptance.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"The work has sufficient quality, and is sufficiently clear and original. For detail, please see the below pros and cons.\", \"overall_pros\": \"- Dataset contains dependencies in FoV, not artificially induced but present due to the attempt at realism in the simulation, a realistic aspect which hasn't been addressed in previous work. \\n-The complete set of factors (e.g. product of number of possible values for each factor), totals to 2.92B, thus the dataset contribution allows for addressing the generalization problem which hasn't been addressed in previous work. \\n-Disentanglement metrics notably cannot be evaluated on acquired camera images, so the recording of an annotated dataset under the same conditions in a real-world setup enables researchers to study this problem in a more controlled manner. \\n-The observation that deeper and wider autoencoders can arguably scale to datasets with fine-grained factors is an informative observation for the community. \\n-Similar to [1], the results collected in the extensive experimental study provide information to the community which can be analyzed and built upon in future work. \\n-The OOD evaluation is novel, interesting, and informative. Though the downstream task is quite connected with the disentanglement evaluation itself, the consideration of this setting is a contribution to the community. \\n-The negative observation in sim-to-real transfer is a net positive for this work, as it is informative to the community in specifying a problem which can be worked on in future research. \\n-The observation that adding input noise in training, a common tool in classic sim-to-real robotics work, is useful for transfer in representations is practical and meaningful.\", \"overall_cons\": [\"For the evaluation prior to OOD evaluation, it appears disentanglement is evaluated on the full collected dataset. If that is the case, though the dataset itself provides the ability to evaluate generalization, the evaluation prior to OOD evaluation does not evaluate generalization. It is not clear then if overfitting played a role in the observed results.\", \"Fixing k=1 for the weakly supervised method implies control over the data generating process often unavailable in practical applications. At the same time, said experiment can be extended to test how performance degrades with a mismatch in assumptions. Considering k != 1, as well as PCL [2] and SlowVAE [3] (which assume laplacian transitions) could provide interesting comparisons.\", \"The authors state that \\\"many\\\" trained models fully disentangle the factors, but this is \\\"only\\\" possible in the weakly supervised scenario. Clarifying details would be beneficial here, is it true that none of the unsupervised models fully disentangled the factors? If some did, was there an underlying correlation between successful models that can be discussed, or was it simply random? Is using latent traversals alone as a basis sufficient for claiming \\\"full disentanglement\\\"? More specific statements in the description would be desirable.\", \"The authors appear to pre-assume that since latent traversals agree with the theory of [1], quantitative metrics which do not score weakly supervised models over unsupervised models are \\\"ineffective at capturing disentanglement in this setting\\\". 
This reads as confirmation bias and should clarified, either by justifying why metrics which do not agree with latent traversals are \\\"ineffective\\\", or presenting the information without implying judgement on whether the metrics are \\\"ineffective\\\" or not.\", \"\\\"We stress that the OOD2 scenario, which is typically not studied extensively, is only possible because the representations are not trained on the entire dataset.\\\" Does this mean the representations learned for OOD evaluation are trained on less data than what was considered for disentanglement evaluation? Clarifying details would be beneficial here, as this standalone statement simply yields unanswered questions.\", \"How was D1 selected from D, by what sampling process, and how was the number of samples decided upon? For in-distribution generalization, how was the split between training and held-out set in D1 conducted, what was the percentage for the split, and how was it selected? Clarifying detail would be beneficial.\", \"\\\"Since the values are normalized, we can take the average of the MAE over all factors (except for the FoV which is OOD).\\\" Why can you not take the average of the MAE over the FoV which are OOD? This statement could be interpreted as some factors being OOD while some not, when a split within the factor set was not mentioned previously. This sentence requires clarification.\", \"Many questions brought up by Section 4 (such as the above), are addressed in part in the experimental setup section for Section 5. I'd suggest providing said information earlier so the confusion yielded in Section 4 is mitigated.\", \"Notably, the color hues simply mentioned in Table 1, without any explanation on why they are not included within Table 1, appear to be what is ablated upon for OOD1. It would have been beneficial to have earlier clarification on this point. Furthermore, distribution shift in the color hue but all underlying factors being the same is specific type of distribution shift not clarified to the reader. Consideration of discrepancies in renderer would be interesting, but more importantly, discrepancies in the factor value set. The distribution shift considered here is shift in nuisance factors, a useful test but limited in its scope.\", \"\\\"Our results therefore suggest that highly disentangled representations are useful for generalising out-of-distribution as long as the encoder remains in-distribution\\\" This statement can be seen as a bit misleading given the data is only out-of-distribution in terms of nuisance factors, not the factors the model was evaluated on disentanglement with respect to. It's intuitive that a highly disentangled representation will capture the ground truth factors well, and thus be more robust to shift in nuisance factors, and it is useful to show this, but the fact that we are looking at shift in nuisance factors should be clear to the reader.\", \"The justification for training half of the models with gaussian noise is provided in the final page of the paper. As with the previous comments, it would be beneficial to the reader to make this clear when introducing the experimental design decisions.\"], \"conclusion\": \"This work provides significant contribution in experimental analysis on the performance of disentanglement with the inclusion of realistic complexities. Both the positive and negative observations are clear, interesting, and informative. 
While I suggested many ablations which could provide the reader with further information, I don't see the lack of inclusion of said ablations as an issue in the work, the experimental study is already extensive. The main issue I see with the work as it is merely comes from a writing standpoint, ensuring the statements made are justified, and ensuring the detail is there in the descriptions for the reader to understand, and most importantly, be able to reproduce, said results.\\n\\nOverall, I see this is a valuable contribution, but would like to see improvement in the writing given the points brought up in the revision.\", \"questions\": \"- Can you describe how given a particular state for the factors of variation, how said factors are processed to render the observation? A description of how each factor individually influences the observation would be beneficial in the main paper, if only briefly, such that the reader can see why it is (1) challenging and (2) requires modeling of fine details. \\n- \\\"The training set for the VAEs contains 8 randomly chosen color hues.\\\" How do these color hues differ from the cube color hues? How do they specifically affect the rendering? Clarifying detail for this statement is needed. \\n- Were any ablation studies performed for the model architecture? Many model details are presented without any experimental testing discussed, an ablation study would be informative to the reader. \\n- Did you test the behavior of models with latent_dim < factor_dim, e.g. latent space dimensionality of 5, as well as latent_dim = factor_dim, e.g. latent space dimensionality of 7? Such results could be interesting to see if, and how much, performance degrades not only as latent_dim progressively increases from factor_dim, but also when latent_dim progressively decreases from factor_dim, considering the equality case as well. \\n- \\\" Finally, note that the BetaVAE and FactorVAE metrics are not straightforward to be evaluated on datasets\\nthat do not contain all possible combinations of factor values.\\\" Could you detail why they are not straightforward to evaluate? Clarifying detail would be helpful. \\n- Did the authors consider other unsupervised model selection techniques, such as UDR, to see if this under or outperformed the weakly supervised loss selection method?\\n- Why the choice of MAE? Was this choice ablated on?\\n\\n[1]: Francesco Locatello, Stefan Bauer, Mario Lucic, Sylvain Gelly, Bernhard Scholkopf, and Olivier \\u00a8\\nBachem. Challenging common assumptions in the unsupervised learning of disentangled representations. In International Conference on Machine Learning, 2019.\\n\\n[2]: Aapo Hyv\\u00e4rinen and Hiroshi Morioka. Nonlinear ica of temporally dependent stationary sources. In Proceedings\\nof Machine Learning Research, 2017. \\n\\n[3]: David Klindt, Lukas Schott, Yash Sharma, Ivan Ustyuzhaninov, Wieland Brendel, Matthias Bethge,\\nand Dylan Paiton. Towards nonlinear disentanglement in natural data with temporal sparse coding.\", \"arxiv_preprint_arxiv\": \"2007.10930, 2020.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The new standard dataset for complex disentanglement learning\", \"review\": \"## Summary\\n\\nThe paper presents a new, more complex, dataset for the use of disentangled representation learning. The dataset is based on real and simulated images of the trifinger robot platform. There are 7 factors of variation with high-resolution measurements of these factors. The dataset contains over 1 million simulated images and another ~1000 annotated images of a real trifinger robot arm.\\n\\nThe authors also present a new neural architecture to scale disentanglement on more complex datasets and present a large empirical study on the performance of various techniques on out-of-distribution downstream task performance.\\n\\n\\n## Quality & Clarity\\nThe paper itself is well written, with a structure that effectively guides the reader through the work and results.\\n\\nThe dataset, significance and experiments are clearly outlined.\\n\\n## Originally & Significance\\n\\nThe novelty of the dataset is clear. Providing a complex disentanglement dataset where the underlying factors of variation are inherently correlated. The in- and out-of-distribution experiments are possible because of the presence of both synthetic and real-world data.\\n\\nThe experiments run are repeated multiple times and the results are convincing. They use both unsupervised and weakly supervised approaches and the results are both intuitive and supported by the literature. \\n\\nThe experiments on out-of-distribution representation transfer are interesting and show that disentangled representations can lead to better transfer to out-of-distribution tasks.\\n\\n\\n## Outcome Rationale\\n\\nThis dataset is likely to be extremely useful to the community going forward and work disentangled representation learning is likely to benefit from it. The experimental setups are sensible and the largescale benchmarks support the use of disentangled representations when transferring from simulated to real-world scenarios.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
sCZbhBvqQaU | Robust Reinforcement Learning on State Observations with Learned Optimal Adversary | [
"Huan Zhang",
"Hongge Chen",
"Duane S Boning",
"Cho-Jui Hsieh"
] | We study the robustness of reinforcement learning (RL) with adversarially perturbed state observations, which aligns with the setting of many adversarial attacks to deep reinforcement learning (DRL) and is also important for rolling out real-world RL agent under unpredictable sensing noise. With a fixed agent policy, we demonstrate that an optimal adversary to perturb state observations can be found, which is guaranteed to obtain the worst case agent reward. For DRL settings, this leads to a novel empirical adversarial attack to RL agents via a learned adversary that is much stronger than previous ones. To enhance the robustness of an agent, we propose a framework of alternating training with learned adversaries (ATLA), which trains an adversary online together with the agent using policy gradient following the optimal adversarial attack framework. Additionally, inspired by the analysis of state-adversarial Markov decision process (SA-MDP), we show that past states and actions (history) can be useful for learning a robust agent, and we empirically find a LSTM based policy can be more robust under adversaries. Empirical evaluations on a few continuous control environments show that ATLA achieves state-of-the-art performance under strong adversaries. Our code is available at https://github.com/huanzhang12/ATLA_robust_RL. | [
"reinforcement learning",
"robustness",
"adversarial attacks",
"adversarial defense"
] | Accept (Poster) | https://openreview.net/pdf?id=sCZbhBvqQaU | https://openreview.net/forum?id=sCZbhBvqQaU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"GCN1IuI89fQ",
"20tv9w68FUI",
"XNEMnR2FrZh",
"jp5_qVB-b5u",
"tWzIi0FsQjt",
"a0PZbyAcnxv",
"Nurk0Mo-DDs",
"f7IPgfJ8BQ8",
"oaVa5eMhyic",
"zjW8gy6u_Rj",
"vEFm4Yx8QQ"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040497775,
1605639391617,
1605592253968,
1605592195055,
1605592040387,
1605591875544,
1605591782448,
1604037285028,
1603932062804,
1603899240572,
1603853556527
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3741/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3741/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3741/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3741/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3741/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3741/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3741/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3741/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3741/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3741/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper describes a new technique to train an adversarial MDP to perturb the observations provided by the environment. This adversarial MDP is then used to train an RL agent to be more robust. Since the adversarial agent essentially defines an observation distribution for the environment, the RL agent needs to optimize a POMDP. This is nice work that was unanimously praised by the reviewers. It produces stronger adversaries and more robust RL agents than previous work. This represents an important contribution to the state of the art of robust RL.\"}",
"{\"title\": \"Update of my review\", \"comment\": \"Thank you for the detailed response. The additional explanations and experiments have addressed most of my concerns, so I will increase my score to 6.\"}",
"{\"title\": \"Thank you for your questions! Please see our response below\", \"comment\": \"We really appreciate the detailed review comments and we provide our response below.\\n\\n1. Optimality under approximation\\n\\nThis is a very good question, but unfortunately, it is a very challenging question and beyond the reach of our paper. In section 3.1, we formulate the learning problem of an optimal adversary as finding an optimal policy on MDP. Thus, finding the optimal adversary itself becomes a general RL problem. When a deep neural network function approximator is used to solve an RL problem, currently very limited theoretical study is done to show the gap between the learned policy and the optimal one, and in some cases, even the optimal policy itself is beyond our reach so it is very hard to compare the two. For example, although we can combine RL with function approximators to solve the game of Go and outperform top-level human players, it is unclear to us what is the optimal policy for Go and how far our currently learned agents are away from it.\\n\\n\\n2. Improvement from SA Reg\\n\\nAs the reviewer correctly pointed out, ATLA and SA-reg are two complementary methods. Firstly, the SA regularization helps model robustness, as Theorem 5 suggested in Zhang et al. Secondly, as a regularization, SA-reg makes function approximators more smooth so they become more robust under small perturbations and have less \\u201cerrors\\u201d. Both of the two factors are effective and it is hard to distinguish them from each other. When combined with our ATLA framework, we believe the smoothness caused by the regularizer might be more important, as our ATLA framework does not explicitly encourage smoothness. In high dimensional environments like Ant where smoothness is more critical, we can see the ATLA with SA-reg works more effectively than the ones without SA-reg. We added this discussion in Section 4.\\n\\n3. Adversary and agent get stronger over time\\n\\nWe provide additional results on evaluating agents during training. For our best setting (ATLA LSTM + SA reg) we evaluate the model checkpoints with RS attack at 20%, 40%, 60%, 80% training steps. The results are provided in Table 4 in Appendix A.3. The overall trend is that agents are getting stronger over time, achieving better robustness in later checkpoints.\\n\\nIt is unsure how to fairly evaluate the adversary during training since intermediate adversaries are closely related to their corresponding training agents. Similarly, it does not quite make sense to evaluate the accuracy of the discriminator in GAN training, as the discriminator is specific to the generator under training, and what we care about is the performance of the generator (analogous to the agent in our setting). We can access the quality of the adversary by evaluating the robustness of the agent - if the agent is robust to strong adversarial attacks such as RS attacks, the adversary should also be strong, otherwise, the agent cannot learn to be robust. We included an evaluation of the agents in Appendix A.3, as discussed above.\\n\\n\\nWe thank you again for your helpful comments and please let us know if you have any additional questions or concerns.\"}",
"{\"title\": \"Thank you for the comments and suggestions! Please see our response below.\", \"comment\": \"We greatly appreciate the helpful comments from the reviewer. They help us improve our paper a lot. We now answer the reviewer\\u2019s questions below:\\n\\n1. Equation (2):\\nThank you for pointing this out. We have updated Eq. 2 (now becomes Eq. 3), fixed the typos, and added more explanations.\\n\\n2. Deterministic or stochastic $\\\\nu$:\\nSorry for mixing notations here. We have updated our paper and now we keep using the stochastic adversarial notation.\\n\\n3. Figure 1:\\nWe have improved the caption below Figure 1 to make this example more self-contained and also added pointers to later sections in Figure 1. We hope this figure can be a good example in the Introduction to explain that the robustness issue is not just in deep reinforcement learning with function approximation.\\n\\n4. Policy teaching and policy poisoning:\\nWe really appreciate the provided references and we have cited them in the related work section. Policy teaching and policy poisoning manipulate the reward or cost signal during agent training time to induce the desired agent policy. Essentially, policy teaching is a training time \\u201cattack\\u201d with perturbed rewards from the environments (which can be analogous to data poisoning attacks in supervised learning settings), while our goal is to train a robust agent against test time (not training time) adversarial attacks on state observations (not rewards).\\n\\n5. Discussions in experiments:\\nSorry for the confusion. We included both MLP and LSTM agents because in Section 3.2, we show that learning an agent under adversary is a POMDP problem, so a history-dependent policy (LSTM policy) can potentially perform better. We observe that ATLA (LSTM) overall outperforms ATLP (MLP) in experiments, matching our theoretical observation in Section 3.2. We have updated the paragraph on discussion experiment results to cover all methods, and structured this paragraph to clearly distinguish between MLP and LSTM models.\\n\\n6. Discrepancy in the number of trained agents\\nThis is mainly due to computational constraints. To ensure reproducibility, we train each setting multiple times, and each agent is evaluated by hundreds of independent adversaries (because some attacks have hyperparameters, and we run a grid search to train a large number of adversaries). We report the agent with median robustness overall repeated training runs. The computation cost for training and attacks is high so we were not able to repeat all settings by 21 times. \\nDuring the discussion period, we trained more agents and now almost every setting has 21 agents. We find that the median reward under attack is roughly the same as the ones reported in our paper, so our main results are reproducible and remain unchanged.\\n\\nWe thank the reviewer again for the very helpful comments, and please feel free to let us know if any of your concerns are still not addressed or if you have further questions.\"}",
"{\"title\": \"Thank you for the encouraging comments!\", \"comment\": \"Thank you for the great summarization of our main contributions and we really appreciate your encouraging comments. The perspective of understanding SA-MDP from asymmetric competitive multi-agent problems is insightful. We will study further into this direction to connect asymmetric game theory to SA-MDP. Thank you for providing this great insight! Feel free to let us know if you have any further questions regarding our paper.\"}",
"{\"title\": \"[1/2] We added new attacks and baseline results, and further explained Lemma 1\", \"comment\": \"We really appreciate your helpful comments. We have added additional experiments to include the new attack you mentioned, and also compared with existing adversarial training methods. We address your concerns below.\\n1. Novelty:\\nFirst, although the idea of learning an optimal adversary is from a lemma in SA-MDP[2], [2] did not use it to train an adversary for improving agent robustness, and also did not reveal the POMDP nature of learning a robust agent (our Lemma 2); based on this insight, we show the necessity of using non-Markov policies for robust agents with LSTM. No prior works in this area pointed out this connection to POMDP or proposed non-Markov policies for adversarial robustness. Our ATLA framework outperforms existing works using pure regularization [2] and adversarial training with weak adversaries [6], sometimes by a large margin, and achieves state-of-the-art performance.\\nSecond, as you pointed out, our problem is fundamentally different from RARL[1], as RARL focuses on environment changes and we focus on perturbations on state observations. Indeed, learning an adversary has been used as a key idea in many works such as GANs, but our work is the first to follow a solid theoretical framework (SA-MDP) to analyze the learning procedure of the agent and the adversary in the setting of perturbations of observations in RL setting. Importantly, unlike RARL, the agent learning problem is a POMDP in our case.\\nLastly, we empirically demonstrate that the \\u201coptimal\\u201d adversarial attack on DRL agents is very effective, as this attack is backed by solid theory while most existing attacks[3,4,5] are based on certain heuristics. In Figure 3 and Table 1 we show that this attack is significantly stronger than previous attacks. It can become a strong benchmark for evaluating the robustness of DRL agents in future works.\\n\\n2. Lemma 1 and Algorithm 1:\\nThank you for pointing this out. Algorithm 1 and Lemma 1 actually do match, and we have updated our paper to further explain it.\\nSpecifically, in Lemma 1, the reward function $\\\\hat{R}(s, \\\\hat{a}, s\\u2019)$ by definition is a conditional expectation of the adversary\\u2019s reward. The adversary\\u2019s reward is a random variable. See our updated Lemma 1 which includes this expectation explicitly. In Algorithm 1, during agent rollout, $\\\\hat{r} = -r$ is just one sample of this random variable. Formally, we include the derivation of the distribution of $p(\\\\hat{r} | s, \\\\hat{a}, s\\u2019)$ and its expectation in Appendix A, and you will find that when we assign the adversary reward $-r$ when the agent\\u2019s reward is $r$, you will get the expectation in Lemma 1. So there is no discrepancy between Lemma 1 and Algorithm 1.\"}",
"{\"title\": \"[2/2] We added new attacks and baseline results, and further explained Lemma 1\", \"comment\": \"3. Additional experiments and comparisons:\\n\\n 3.1 Additional Attacks:\\nAmong the mentioned attacks, [3] is one step FGSM attack using the value function, which is weaker than the multi-step PGD attack proposed in [6]. Our paper already includes the stronger multi-step attack in [6] (named as \\u201ccritic attack\\u201d )\\n[4] proposed \\u201cstrategically-timed attack\\u201d and the \\u201cenchanting attack\\u201d. The strategically-timed attack used a multi-step gradient based attack similar to [6] for attacking only partial frames to avoid detection, so this is a weaker attack with more constraints for the attacker. In our paper, all attacks are applied in every step, which is stronger than the threat model used in [4]. The goal of the enchanting attack is to lure the agent into a certain state, and it is not directly minimizing cumulative reward. Since our goal is to reveal the agent\\u2019s true robustness, we always evaluate the agent under the strongest possible attack that reduces agent reward most, so we did not use attacks in [4].\\n[5] proposed \\u201csnooping attack\\u201d, which is a black-box adversarial attack. As suggested by the reviewer, we implemented Snooping attack in [5] and tested it on all our agents. The results are shown in Table 1 (main text) and Table 3 (appendix). From the results, we can see that Snooping attack has strength similar to the MAD attack, and is typically worse than RS attack and the \\u201coptimal\\u201d attack proposed in this paper. Thus, the main conclusion and evaluation results do not change after adding this new attack into comparison.\\n\\n 3.2 Comparison to [6] for robust training under attack:\\nAs suggested by the reviewer, we added a comparison to [6], which uses the critic attack for adversarial training. The results are included in Table 2 (main text) and Table 3 (appendix). We find that this method cannot reliably improve robustness under our suite of strong attacks. The reason is that the critic attack is a relatively weak adversary, so the agent learned with a weak adversary cannot defend against a stronger adversary.\\n\\n4. Sample complexity:\\nWe want to emphasize that just as in supervised learning settings (e.g., image classification), adversarial training can significantly increase computational complexity, e.g., [7] must be trained with more epochs than ordinary training to converge. There is no free lunch for adversarial robustness [8]. Our proposed method usually requires up to 5X more iterations compared to vanilla PPO agents, which is comparable to adversarial training for supervised learning. We believe the computational complexity of our method is reasonable, especially considering that the field of adversarial robustness in DRL is relatively new. Further reducing the computational or sample complexity of adversarial training in RL setting is a good future direction.\\n\\n5. Phrase \\\"training time attack\\\"\\uff1a\\nThank you for pointing out this potential confusion. 
We have changed \\\"training time attack\\\" to \\\"adversarial training\\\" in our paper and made it clear that we use adversarial attacks on state observations to improve the robustness of the agent during test time.\", \"conclusion\": \"We hope Reviewer 4 can re-evaluate our paper based on our new empirical results (snooping attacks [5] and an adversarial training baseline [6]) and clarifications on our Lemma, Algorithm, and sample complexity. Thank you and we will be glad to answer any additional questions you may have.\", \"references\": \"[1-6]: the same as the references in your review.\\n[7] Madry, Aleksander, et al. \\\"Towards deep learning models resistant to adversarial attacks.\\\" arXiv preprint arXiv:1706.06083 (2017).\\n[8] Schmidt, Ludwig, et al. \\\"Adversarially robust generalization requires more data.\\\" Advances in Neural Information Processing Systems. 2018.\"}",
"{\"title\": \"An interesting framework for robust DRL\", \"review\": \"The authors presents how to learn optimal adversary following the state-adversarial Markov decision process (SA-MDP), and also proposes alternating training with learned attacks (ATLA) framework that trains the optimal adversary online together with the agent to improve the robustness of the DRL agent. Experiment result shows that ATLA outperforms the explicit regularization based methods.\\n\\nOverall I think the paper is well-written and clearly illustrated the methodology. The experimental results are mostly comprehensive. The contribution is clear. I still have a few questions and concerns below:\\n\\n1) It is a natural and interesting idea to use alternating training to optimize both the adversary and the agent online. In terms of finding the (near) optimal adversary following the theory of SA-MDP, the authors argue that because of using \\\"function approximator to learn the agent so it\\u2019s no longer optimal\\\". However, it seems unclear how much the approximation affects the optimality. For tabular case instead of DRL, is it possible to really find the optimal adversary, and if so, how? How far away the learned strong adversary from the real optimal one? Indeed from experiments the learned adversary can better attack than other baselines, but it would still be important to understand the advantage and room to improve in a principled way. For finding the policy, it is understandable that because of the difficulty of solving POMDP, this paper does not solve the optimal policy. \\n\\n2) ATLA-PPO + SA Reg further improves performance over ATLA-PPO in experiments. This seems to suggest that the advantage of ATLA-PPO and SA Reg are complementary. Does the regularization on the function approximator provides additional robustness or it covers the error of function approximator using LSTM? Is it possible to understand this in experiments?\\n\\n3) Are the adversary and the agent both getting stronger over time? The paper only showed final results and did not show the running time result. Hypothetically because of the alternating training, the adversary and the agent should both be improved and it would be interesting to verify this in experiments.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"Summary of the paper: The paper studies adversarial attacks in RL, focusing both on the design of optimal attack strategies on RL agents, as well as robust training RL procedures for mitigating attacks. Building on the results of (Zhang et al., 2020), the paper proposes a new learning framework (ATLA), that simultaneously trains a (strong) adversary and a (robust) deep RL agent. The paper showcases the importance of the new framework through extensive experimental evaluation.\", \"reasons_for_score\": \"Overall, I find the paper to be an interesting read and its contribution relevant to the line of work on adversarial attacks in RL. The contributions of the paper seem non-trivial, and include a framework for designing optimal attacks and training procedures that can optimize for robustness. As shown by the experiments, the proposed solution leads to significant increase in performance compared to state of the art baselines. These results complement those of (Zhang et al., 2020). Nonetheless, the presentation of the paper could be improved in terms of clarity. Some parts of the paper could be reorganized and explained in more detail. Suggestion for improvements and questions are outlined below.\", \"clarity\": \"The paper is overall enjoyable to read, but some parts are not clearly/precisely written. There are quite a few typos, some of which might be important for understanding the content. I did not follow equation (2), which seems to contain typos. Could you explain in more detail the loss function defined with this equation? Furthermore, notation in the paragraph before section 3.1 is partly confusing, in particular, $\\\\nu$ seems to be a deterministic function, but then for observation $\\\\hat s$ we have $\\\\hat s \\\\sim \\\\nu(s)$ indicating that $\\\\nu(s)$ might be a distribution from which we sample $\\\\hat s \\\\sim \\\\nu(s)$. I would also suggest reorganizing the content related to Figure 1, which is introduced in section 1, but only explained in detail in section 3.1.\", \"related_work\": \"The related work is generally covered well, but it could be expanded by providing some connection to the line of work that studies policy teaching and policy poisoning attacks in RL. Could you explain how the setting of this paper compares to those studied in this line of work (e.g., Parkes et al. 'Policy Teaching Through Reward Function Learning', Ma et al. 'Policy Poisoning in Batch Reinforcement Learning and Control', Rakhsha et al., 'Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning', etc.)?\", \"experiments\": \"I'm wondering to what extent are the results for different methods in Table 2 comparable. Namely, the methods seem to be based on different architectures, so it is not immediately clear what conclusions should be drawn from these results. Surprisingly, the discussion on page 8 does not seem to compare the results of ATLA-PPO (MLP) and SA-PPO. Could you elaborate more on these results and make relative comparison? Furthermore, in the following sentence: 'For reproducibility, for each setup we train at least 5 (up to 21) agents, attack all of them and report the one with median robustness.' - why is there discrepancy between different setups in the number of trained agents?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A well written and motivated paper that training agents and adversary alternatively\", \"review\": \"The paper is very well written and the considered problem of training an adversary along with the agent is very interesting. Within the proposed concept, the parameterized adversary can be trained by viewing the agent as a part of the environment, so it avoids to access the parameters of the agent policy. From the perspective of the agent, with an unknown adversary, the MDP becomes a POMDP with uncertainty hidden in the adversary, and hence the fact of using LSTM policy is much better for the agent is reasonable. The entire problem is wisely formulated. The experimental settings are well designed and results support the positiveness of the proposed framework.\\n\\nI only have one comment that the SA-MDP can also be understood from another perspective. That is, SA-MDP is actually an asymmetric competitive multi-agent problem, and the alternative training of agent and adversary can be viewed as an instance of self-play. Also, the optimality of SA-MDP for either the agent or the adversary can be explained through multi-agent RL or game theory. It would be interesting if the authors could take a look into such a direction.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"This paper has good insights, but more work is needed\", \"review\": \"Summary:\\nThis paper proposes to improve the robustness of a reinforcement learning agent by alternatively training an agent and an adversary who perturbs the state observations. The learning of an \\u201coptimal\\u201d adversary for a fixed policy is based on the theory of SA-MDP in prior work. The learning of an optimal policy under a fixed adversary is done by solving a POMDP problem. Experimental results show that the proposed alternating training with learned adversaries (ATLA) framework can improve the performance and robustness of PPO.\", \"strengths\": \"1. The paper is well-organized and easy to follow.\\n2. The authors distinguish between the vulnerability of function approximations and the intrinsic weakness of policy, which is interesting and can be useful for the community to investigate the vulnerability of deep RL.\\n3. The experiment results show that both the learned adversary and the trained agent perform well, in terms of attacking and learning respectively. In addition, the proposed LSTM-based policy is shown to be more robust than regular feedforward NNs.\", \"weakness\": \"1. The novelty of this paper is a little limited. (1) The idea of alternative training the agent and the adversary is similar to RARL[1], and Algorithm 2 (ATLA) is similar to Algorithm 1 in [1]. Although RARL focuses on the case where adversary directly changes the environment and ATLA focuses on the observation perturbation attacks, the whole ideas are still similar. (2) The main method of learning an adversary is based on the theoretical work of SA-MDP[2]. \\n2. The authors claim that the proposed adversary is strong since it follows the theoretical framework of SA-MDP. However, in Lemma 1, the adversary reward in SA-MDP, \\\\hat{R}, is defined as the weighted average of -R, while in Algorithm 1, the adversary reward is given by -R itself. It is not clear why such a relaxation still follows the theoretical framework. More details illustrating the approximation and some analysis about the optimality will be appreciated.\\n3. In the experiment section, the authors only compare the proposed algorithm with [2] in terms of \\u201coptimal\\u201d attack and robust training. However, there are a lot of works that attack the observations of a fixed policy [3,4,5]. And more importantly, [6] also proposes to train a robust agent under adversarial attacks. It will be more convincing if the authors empirically or theoretically compare with some of these potential baselines.\\n4.The computational complexity / sample complexity of the proposed ATLA might be problematic, as for each iteration of learning, the adversary needs to solve a new MDP, which makes the proposed robust training less practical to use.\", \"minor_comments\": [\"This paper sometimes uses the phrase \\\"training time attack\\\" to refer adversarial attacks, which is misleading, e.g. the second contribution, the third paragraph of related work. Training-time attack usually refers to poisoning attack, which changes the training dataset and alters the learned policy, different from the scenario in this paper where an adversary wants to fool a fixed policy.\"], \"refs\": \"[1] Pinto, Lerrel, et al. \\\"Robust adversarial reinforcement learning.\\\" arXiv preprint arXiv:1703.02702 (2017).\\n[2] Zhang, Huan, et al. 
\\\"Robust Deep Reinforcement Learning against Adversarial Perturbations on Observations.\\\" arXiv preprint arXiv:2003.08938 (2020).\\n[3] Huang, Sandy, et al. \\\"Adversarial attacks on neural network policies.\\\" arXiv preprint arXiv:1702.02284 (2017).\\n[4] Lin, Yen-Chen, et al. \\\"Tactics of adversarial attack on deep reinforcement learning agents.\\\" arXiv preprint arXiv:1703.06748 (2017).\\n[5] Inkawhich, Matthew, Yiran Chen, and Hai Li. \\\"Snooping Attacks on Deep Reinforcement Learning.\\\" arXiv preprint arXiv:1905.11832 (2019).\\n[6] Pattanaik, Anay, et al. \\\"Robust deep reinforcement learning with adversarial attacks.\\\" arXiv preprint arXiv:1712.03632 (2017).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
8YFhXYe1Ps | Interpretability Through Invertibility: A Deep Convolutional Network With Ideal Counterfactuals And Isosurfaces | [
"Leon Sixt",
"Martin Schuessler",
"Philipp Weiß",
"Tim Landgraf"
] | Current state of the art computer vision applications rely on highly complex models. Their interpretability is mostly limited to post-hoc methods which are not guaranteed to be faithful to the model. To elucidate a model’s decision, we present a novel interpretable model based on an invertible deep convolutional network. Our model generates meaningful, faithful, and ideal counterfactuals. Using PCA on the classifier’s input, we can also create “isofactuals” – image interpolations with the same outcome but visually meaningfully different features. Counter- and isofactuals can be used to identify positive and negative evidence in an image. This can also be visualized with heatmaps. We evaluate our approach against gradient-based attribution methods, which we find to produce meaningless adversarial perturbations. Using our method, we reveal biases in three different datasets. In a human subject experiment, we test whether non-experts find our method useful to spot spurious correlations learned by a model. Our work is a step towards more trustworthy explanations for computer vision. | [
"Interpretable Machine Learning",
"Counterfactuals",
"Computer Vision",
"Human Evaluation",
"User Study"
] | Reject | https://openreview.net/pdf?id=8YFhXYe1Ps | https://openreview.net/forum?id=8YFhXYe1Ps | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"A5Y2Zp-pkh",
"qius7XfZWRf",
"AflV7AWzFQt",
"xO8zcGzcp_6",
"_WsnMRbeLHB",
"2tX2E1p_wa",
"LPM9kMZqeaw",
"TPOe5A1-1d7",
"0ef8tjEhCNy",
"bw-xsqOCWht",
"o7xomokGxe8",
"5sUGratera",
"ZF17IvHJIus",
"d8oBJqnpCFB",
"qRc354yIWoL"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040349364,
1606306160886,
1606306092572,
1606306076244,
1606306046333,
1606305609201,
1606305564663,
1606305397080,
1606305364836,
1606305316860,
1604887972325,
1604100490558,
1603955102347,
1603926239499,
1603801669974
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3736/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3736/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3736/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3736/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3736/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All the reviewers agree that the paper presents an interesting idea, and the main concern raised by the reviewers was the clarity of the paper. I believe that the authors have improved the presentation of the paper after rebuttal, however, I still believe that the paper woudl require another round of reviews before being ready for publication, in order to properly assess its contributions.\"}",
"{\"title\": \"R3\", \"comment\": \"> In Figure 5(b), it's not clear how real images for baseline condition are sampled. To allow proper comparison, as in counterfactual design, three images should be selected as primary images, the rest images should be sampled based on its minimum distance to the primary images but a different label.\\n\\nWe have received several comments about the baseline from other reviewers too. We improved the description and motivation of the baseline integrating many of the reviewer\\u2019s remarks. Thank you!\", \"we_understand_your_suggestion_as_follows\": \"You suggest to recreate counterfactual interpolation by grouping similar images together for one row but with different logit scores. While this would provide a counterfactual interpolation based on real images, it would likely be disadvantageous for the baseline. Consider that the main pattern in the dataset was that some objects (arms) change their location relative to other objects. If we measure the similarity in pixel space, similarly colored images would be grouped together -- providing a false impression that color does not change and hence is not important for the prediction. Participants may not notice the colour bias this way. A more complex approach based on intermediate features could come with its own challenges and so we decided for the baseline with least bias.\\n\\n>[...]the authors identified principal components that correspond to attributes like gender, smiling.[...]A quantitative analysis is required to demonstrate the dependency of the \\\"attractive\\\" attribute on other attributes. [...]\\n\\nFollowing your comment, we included the correlation coefficients between the \\u201cAttractive\\u201d logit and the other properties in the CelebA dataset.\"}",
"{\"title\": \"R2\", \"comment\": \"> How about non-image datasets?\\n\\nWe discuss other data domains in the conclusion. Our method mainly depends on the availability of an inverse, PCA and the architecture of the network (e.g. where to put the classifier) and could probably be adapted to other domains. \\nMinor\", \"intro\": \"I would suggest using \\u201ctransparency\\u201d rather than \\u201cinterpretability\\u201d when referring to logistic regression (e.g. Lipton, 2016). The interpretability of linear model weights is indeed debatable, as weights will depend on the regularization and signal-to-noise ratio in the data (Haufe et al., 2014).\\nNo clear flow between the different works in the intro. No clear motivation behind counterfactuals.\", \"proofreading\": \"paper is quite hard to follow and minor changes to grammar (e.g. \\u201cTheir similarity is easy to seen\\u201d) makes it more difficult to assess. The quality of the writing deteriorates in sections 3, 4 and 5.\\nIt is unclear what scale delta epsilon represents, and whether we can expect the norm of the different techniques to be comparable.\"}",
"{\"title\": \"R2\", \"comment\": \">I am confused by the section on saliency maps: what does h represent? The activations at an intermediate layer? The motivation is unclear: what are the authors trying to highlight in these \\u201csaliency maps\\u201d? Are these computed attributions or are these L1 distance between activations (in %) between x and x_tilde? Or is it a cosine distance (as suggested by the next sentence mentioning the angle?)\\n\\nWe agree that the mathematical formulation was not clear. The score is based on the dot-product between the change $|\\\\Delta h|$ and feature activation $h$. We have clarified this in the manuscript. \\n\\n> The tasks used for illustration are not described in the text. Examples of y and epsilon should be provided.\\n\\nWe added a respective comment to the manuscript. \\n\\n\\n> Is the technique limited to the model\\u2019s predicted classes?\\n\\nYou could add classifiers, finetune them and then explain them. Or you could also cluster intermediate features and invert them back.\\n\\n> How is \\u201cideal\\u201d counterfactual described and mathematically verified?\\n\\nWe added a definition of ideal counterfactual. Mathematically, there must exist a path from a startpoint $x$ to $\\\\tilde x$ such that the gradient of the path is perfectly aligned with the gradient of the classifier.\\n\\n> The relationship between counterfactuals and e.g. integrated gradients is unclear: [..]\\nWe have improved the description of this section substantially. \\n\\n\\n> What are the participants in the human-based study viewing? Are they comparing the counterfactuals to e.g. SmoothGrad maps, or the saliency as defined per the proposed approach?\\n\\nThe participants were assigned two groups. Each group only saw one explanation technique (our counterfactuals or the baseline). Hence, they are not comparing methods. We reworked the section to amke this clearer.\\n\\n> It is unclear what the participants answered: Figure 5a mentions that the main score is \\u201cstrongly disagree\\u201d for \\u201carms\\u201d (both baseline and interpolation) while the text refers to \\u201cstrongly agree\\u201d. Example questions would help.\\n\\nUnfortunately, the labels were flipped. We apologize and thank you for pointing this out.\\n\\n> The results of the human-grounded study are not very conclusive. Note: please correct for multiple comparisons due to multiple statistical testing of the same effect.\\n\\nWe have added a discussion of our results in the manuscript. Please see the general response to points raised by all reviewers. \\n\\nFollowing your advice, we have now applied the Bonferroni-correction. The results remain unchanged. Thank you for making us aware of this!\\n\\n\\n> Kim et al., 2018 already displayed that human users were performing poorly at identifying a network\\u2019s decision behavior based on saliency maps. A better comparison could have relied on TCAV instead, especially as the concepts can easily be mapped to the features given the synthetic dataset. This could have made a stronger case for the use of invertible networks, especially as Goyash et al (2019) mention the use of counterfactuals based on concepts.\\n\\nWhile it is true that (Kim et al., 2018) show the limitations of saliency maps, our comparison between directional derivative and gradient adds theoretical evidence against gradient-based attribution method. \\n\\nWe decided against using TCAV in our evaluation as we tackle different questions. 
TCAV requires manually labeled concepts, and we provide a method to discover possible concepts worth annotating. Instead of reporting the correlations between the different attributes and the logit score, we could have reported the TCAV scores. To us, the simple correlation seems more straightforward and comprehensible.\\n\\n(Goyal et al., 2019) extends TCAV to estimate the causal effect of concepts. We report a causal effect score similar to Goyal's for the Two4Two dataset, where we control the data generation process. The main advantage of our approach is that it uses the same model to classify and to generate the explanations. While we could have implemented the work by Goyal for invertible neural networks, the potential insight would be a comparison between VAEs and invertible neural networks.\"}",
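To make the saliency definition above concrete, here is a hedged sketch (assumed shapes; the paper's exact sign and normalization conventions may differ). For each spatial location, the feature change between an input and its counterfactual is combined with the feature activation via a channel-wise dot product:

```python
import numpy as np

def counterfactual_saliency(h, h_cf):
    """h, h_cf: (C, H, W) feature maps of the input and its counterfactual.
    Returns an (H, W) saliency map from the channel-wise dot product."""
    delta = h_cf - h                       # change induced by the counterfactual
    score = (delta * h).sum(axis=0)        # dot product over channels per location
    return score / (np.abs(score).max() + 1e-8)  # normalize for display
```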
"{\"title\": \"R2\", \"comment\": \"R2: Interesting idea needing more work\\n> the (lack of) clarity of the text.\\n> the assessment of the technique, as the results of the human-grounded evaluation are mixed, with users not being significantly more accurate in finding confounding factors compared to a baseline technique.\\n\\nThis is correct. However, please also take into your consideration that the study still demonstrates the usefulness of our method as well as some of its shortcomings. In contrast to many other evaluations, which do not even consider baselines (e.g. Ribeiro et al. (2016); Singla et al.(2020)) or don\\u2019t even evaluate their methods with human subjects at all, we made an effort to create a simple but strong baseline, taking into consideration findings from HCI about usability issues of explanation techniques. \\n\\nWe would like to ask reviewers to consider the rigor put into the study design and the identification of a good baseline technique as additional contributions of our paper. After all, this might inspire more human evaluations in the machine learning community, if they are deemed valuable. \\n\\n> the limitations of the technique, [...] should be mentioned.\\n\\nWe mentioned the added computation costs in our submitted version and also the challenge and possible approaches to apply invertible networks to RNN or GraphNN. We now also state clearer that our method requires us to use a custom network architecture. Since labeling data isn\\u2019t a requirement specific to our model, but affects rather all architectures we have not mentioned this point. \\n\\n> Novelty, The \\u201cRelated works\\u201d is rather limited,\\n\\nWe agree and rewrote the related work section and commented on the similarities and differences in greater detail.\\n\\n\\n> [Rigor] I found the qualitative evaluation on the 3 datasets unconvincing [...]\\n\\nWe agree that possible conclusions might have reached using other techniques. In our user-study, we show that users can reach similar conclusions using a simple baseline. We now state in the related work section that GAN based counterfactuals could probably create similar looking images. However, they cannot guarantee that the explanations are faithful to the model. \\n\\n> [Rigor] While I was most interested by the discussion around the generation of counterfactuals based on the invertible network compared to based on the integration of gradients, I wished there was a definition of an \\u201cideal\\u201d counterfactual, qualitative or (preferably) quantitative. The single example provided in the main text is appealing but this requires more evidence to me.\\nWe agree and have now included such a definition at the beginning of section 2.\\n\\nWe agree that saliency maps can be inconclusive, no matter what method has been used to generate them. There are more and more studies being published showing evidence for that. We have now included a small summary of such findings in our paper along with the remark that the same useability concern applies to our saliency maps as well. Regardless, our saliency maps are faithful to the model, which is an important and unique contribution in comparison to other methods. A saliency map of a counterfactual highlighting the entire face is still correct as many labels (gender, ethnicity, age, attractiveness) impact the whole face.\\n\\n> Counterfactuals: their quality seems subject to appreciation and confirmation bias, especially on potentially cherry picked examples. 
\\n\\nThe quality of the counterfactuals is evaluated in the user study, where they were picked randomly, not manually. Most examples in the paper are based on the principal components of the dataset (see Figure 1b) and are therefore not cherry-picked. We also show a number of random samples along principal components in the Appendix. \\n\\n\\n> There should be more details about the Two4Two dataset and its motivations, as well as how it relates to other datasets (e.g. Goyal et al., 2019)\\n\\nWe provide an additional description of the Two4Two dataset in the appendix and relate it to other datasets, e.g. the BAM dataset.\\n\\n> How does the proposed approach relate to \\u201ccompleteness\\u201d (Sundararajan et al., 2017)?\\n\\nCompleteness requires that the sum of the attribution map equals the difference of the logit score between the image and the baseline. Our saliency maps do not fulfill completeness as defined by (Sundararajan et al., 2017). We do look at the differences between counterfactual examples, and they can be arbitrarily large, as \\u03c6 can stretch the space. As a side note: we believe that \\u201cCompleteness\\u201d fails to account for the non-linear stretching of neural networks and is not a property to require (but this is a different discussion).\\n\\n> What is the mathematical justification to resize the saliency map [...]\\nAs activations remain localized in a convolutional network, this operation is valid. The lower-resolution feature map still matches the feature locations well. Grad-CAM, for example, does the rescaling even with the last convolutional feature map.\"}",
"{\"title\": \"General Rebuttal Answer\", \"comment\": \"We want to thank the reviewer for their time, effort and detailed feedback. Their comments helped to significantly improve this paper. We address each reviewer\\u2019s comment in-line and provide an overview here.\\n\\nWe generally found the reviews to be a fair assessment of our paper. The reviewers pointed out strengths of our paper such as the novelty (R2), combining a generative and discriminative model (R3), the comparison of the directional derivative dx/dw to the gradient (R2), directional derivatives for constructing counterfactuals (R4), using PCA to create isosurfaces (R4) and conducting a rigorous user study (R2, R4).\\n\\nThe main point of criticism was the quality of the presentation. The manuscript lacked clarity and was oftentimes confusing. Some reviewers raised concerns about the design and the results of our user study.\\n\\nBased on the reviewers' feedback, we rewrote large parts of the paper. We now state our motivations and contributions clearer. We restructured the method section, adding clearer definitions for counterfactuals and ideal counterfactuals. We moved the comparison of the gradient with the directional derivative into the evaluation. For each dataset, we provide a justification of why we used it and how it fits into the overall evaluation of our method. Some parts were moved to the appendix to increase focus on the most important aspects and reduce clutter. Additionally, improved the related work section mentioning how related methods differ from our approach. We think this improved clarity throughout the paper. \\n\\nWith respect to the evaluation, we added an analysis to the celebA section where we now report results of correlations that confirm different hypotheses. Regarding the user study, we found and fixed a severe mistake in a figure that was probably one cause of confusion, heartfelt apologies! Some reviewers criticised that our methods could not beat the baseline. While this is true, we want to emphasize that 1) conducting a user study itself is a valuable contribution that most papers in the field are still hesitant conducting, 2) both methods (our conterfactual and the baseline) have provided users with sufficient information to discover relevant and irrelevant features. This just indicates that our baseline was strong and we discuss this result and it\\u2019s potential implications for interpretability research in the conclusions of our manuscript. \\n\\nWe hope our substantial rework finds your liking, some additional results will find their way into the manuscript until the camera-ready version is due. If you agree that these changes improved our manuscript, we would be grateful for an increased rating.\"}",
"{\"title\": \"R3\", \"comment\": \"R3: interesting idea but the execution and writing left a lot to be desired, seems not proof-read!\\n> The paper an interesting and potentially important idea. But at times, the text is difficult to read.\\n\\nWe agree with your criticism and reworked the entire manuscript substantially.\\n\\n> The conclusion from Figure 2 is not clear. [...]\\n\\nWe removed these remarks from the figure caption. Instead, we discuss them in detail in the CelebA evaluation. \\n\\n\\n> In Figure 2 and Figure 3(b), a label showing the different principal components considered in each column, and the logit/prediction of the classifier for each row will improve the figure's readability.\\n\\nWe have adopted your advice, thank you!\\n\\n> The saliency maps in Figure 3(a) highlight almost the entire face; hence they are inconclusive. \\n\\nWe agree that saliency maps can be inconclusive, no matter what method has been used to generate them. There are more and more studies being published showing evidence for that. We have now included a small summary of such findings in our paper along with the remark that the same useability concern applies to our saliency maps as well. Regardless, our saliency maps are faithful to the model, which is an important and unique contribution in comparison to other methods. A saliency map of a counterfactual highlighting the entire face is still correct as many labels (gender, ethnicity, age, attractiveness) impact the whole face. \\n\\n\\n> Figure 4(a) shows the final results [...] To understand the results, it would be helpful to show some examples over which the integration took place. \\n\\nFigure 4a (now 3b) shows the original image on the left. We have not included intermediate integration steps as they basically show interpolations between the original and the final image. We, however, provide our code so an interested reader can investigate these steps. \\n\\n\\n> An example of a random sample, with minimal/no changes along the independent factors for the proposed method, [...]\\n\\nWe provide examples with no change along the independent factors are given by the counterfactual interpolation. The astronaut integrated along the directional derivative in the old Figure 4a is an example with minimal change along the independent factors.\\n\\n\\n> The number reported at the end of section 3 on \\\"the gradient of x and the directional derivative dx/dw\\\" should be reported in a table [...]\\n\\nYou are right, we should have reported those numbers in a table. We, however, decided to exclude the self-similarity comparison between the gradient and the directional derivative to focus on the other results.\\n\\n> The directional derivative dx/dw, in the model, is w.r.t the weight vector of the binary classifier, trained to identify the label. Related work by Kim et al. (2018)[...]also used directional derivatives w.r.t to binary classifiers, trained to identify a human-defined concept. A comparison with this method will help the reader understand the different applications of directional derivatives and how directional derivatives can be used without an invertible network. \\n\\nWe have added a short comment on the relationship with (and the main difference to) TCAV - which does not compute the derivative on an invertible neural network.\\n\\n\\n\\n> Table 1 doesn't report the results for the supervised method for celebA and tow4two datasets. 
\\n\\nWe have now included these numbers in the updated version of the manuscript.\\n\\n> The numbers reported in Table 2 lack a coherent conclusion. The corr. data and corr. change columns have values in similar ranges. Please elaborate and discuss the results. \\n\\nWe extended the figure caption to discuss the results.\"}",
"{\"title\": \"Answer R4\", \"comment\": \"R4: Use of invertible CNNs to construct counterfactuals and isosurfaces\\n> The reviewer finds the manuscript hard to follow [...]\\n\\nWe agree with your criticism and reworked the entire manuscript substantially.\\n\\n\\n> The descriptions about saliency maps are less relevant to the main idea [...]\\n\\nWe agree that the description had ample room for improvement and we invested much time in focusing the text for improved clarity (we hope you agree). \\n\\n> The comparison between simple gradient and direction derivative is less fair, as the directional derivative makes use of the very information direction [...]\\n\\nWe compare all methods on the same model running the same integration. The gradient and the directional derivative d\\\\phi^{-1}/dw make both use of the direction w, as you can write both as the Jacobi Matrix J * w and J^{-1} w. The reason for their different results is that J^{-1} is suited to translate w to image space and J is not. We therefore think the comparison is fair. However, we have rewritten the description of the method (sec. 3) and hope this improved clarity.\\nIf you visualize \\\\phi^{-1}(\\\\phi(x) + a w) directly, you will get the same result as when integrating d\\\\phi^{-1}/dw over the respective length.\\n\\n> The human study may need to conduct another set of control experiments to show that only original training images (not counterfactual interpolations) are helpful for uses to identify CNN patterns and biases. [...]\", \"we_have_implemented_this_in_the_baseline_condition\": \"users are presented with images from the validation set, sorted by their logits. Since the study was a between-group design with each group only seeing one explanation technique, we can isolate the effect of studying only images and no interpolation. We concluded that studying examples this way is indeed helpful, but also does not allow more than half of the participants to detect the subtle rotation bias. This rejects the theory that users can find shortcuts easily on this dataset. While it is still a relatively simple dataset, it still allowed us to create user tasks that are not trivial. The writing in that section was very condensed and lacked clarity. We improved the text and have integrated your comment in the new version.\\n\\n> [Other minor comments] Figure 1: There is no explanation for (a). What is w The reader may not understand it for the first reading.\\n\\nThanks for pointing this out, we have improved the caption. \\n\\n> [Other minor comments] Figure 4: The reviewer believes normalized scores on the top of the images make better sense.\\n\\nDo you mean not the raw logit value but rather the probability? We think the logit value provides a better summary as a change from 5 to 50 corresponds to a probability from 0.993 to 1-1e-22 which can be better understood as logit.\"}",
"{\"title\": \"Answer R1\", \"comment\": \"> The writing often lacks clarity and the usage of space can be more judicious. [...]\\n\\nWe have substantially reworked the writing for better balance. We have expanded the methods and rewritten the isosurface section to improve clarity.\\n> Focusing on elaborating the method, and maybe one less dataset would improve readability by moving extra evaluation to appendix. [...]\\n\\nWe followed your advice and moved large parts of the mice evaluation to the appendix.\\n> The results of the human subject study are not very convincing. [...]\\n\\nUnfortunately, the labels in the user study figure were swapped in the original version of the manuscript which may have led to the impression that participants find a lot of irrelevant patterns. We apologize and hope that the corrected figure and improved description clarifies our findings. \\n\\n> The appendix (Fig. 9) also shows that the subjects thought both the proposed method and the baseline were equally good.[...]\\n\\nWe improved the layout of the figures and corrected the mistakes you mentioned. Regarding the results of the user study, both methods received comparable ratings. However, both ratings were considerably high and show that we were successful in designing both a usable counterfactual generation method and a baseline that users found useful as well. We discussed the implications of this result in the conclusions of the updated manuscript. On a sidenote: we preregistered our experiment and reported a standardised subjective ratings scale. It would be great if this (in ML rather non-standard) practice would be adopted in interpretability research because it shows that we care about rigorous evaluation with those that are supposed to use explanations: the users.\\n\\n> Minor formatting issues: mismatched quotes throughout (striped, zebra, etc.),[...]\\n\\nThank you, we fixed all of them. \\n\\n> Overall, the paper has some good ideas and interesting analysis[...]I am marginally inclined for it to be accepted.\\n\\nThank you. We agree with your criticism and reworked the entire manuscript substantially. We hope you agree this merits an updated rating :)\"}",
"{\"title\": \"Answer R5\", \"comment\": \"> 2. I found the structure of the paper confusing and lacking in clear elicitation of contributions. \\n\\nWe have substantially restructured the entire manuscript. The contributions are clearly stated at the end of the introduction. We moved Sec 2 to the end of the evaluation and provide appropriate motivation for it.\\n> In here [Sec 3], its unclear why changes in independent principal components while generating counterfactuals not desirable especially if as the authors suggest, the prediction changes. Even if the changes are not observable to humans. This is a highly unusual aspect of the counterfactuals where the counterfactuals shouldn't just explain large changes to logits but changes to predictions as well.\\n\\nWe agree with the definition put forward in (Wachter et. al, 2018) which has the\\u201cclosest possible world\\u201d requirement which states that counterfactuals should not change unrelated properties. If they do, it would be hard to tell which change was responsible for the change in prediction. Small and invisible changes do not provide the user with information.\\n\\n\\n> 3. The objective of each evaluation data-set and study should also be clearly outlined before proceeding to the details. Currently the paper leaves the reader to figure out the main contributions at the cost of hampering the paper's technical significance.\\n\\nWe have restructured this part of the manuscript and clarified this in the updated version of the manuscript.\\n> What is the goal of the user study? Why is the baseline any other method of generating counterfactuals but merely conditioned examples? \\nWe have now stated the goal in the first paragraph in the corresponding section. We also added more detail about our reasoning for not considering other counterfactual methods or saliency maps and discussed this choice. \\n\\n> I also strongly recommend the authors to move away from evaluating their models against celebA labels such as \\\"Attractive\\\" which are rife with ethical concerns. I understand that those are the only options available in celebA but I would recommend using more neutral class labels for experiments if face dataset is an important part of their evaluations.\\n\\nWe share these ethical concerns. We chose celebA exactly because it was already criticised due to its many shortcomings. We expected to be able to confirm and investigate biases that we knew existed and to emphasize the advantages of our method. We now state ethical concerns in the manuscript explicitly.\"}",
"{\"title\": \"While technically incremental, the work provides interesting method to generate multiple kinds of explanations counterfactuals, saliency maps and what is called as \\\"isosurface\\\" of the classifer. Interesting case-study and user study demonstrate potential benefit of the method comparing two types of explanations\", \"review\": \"1. The authors propose a method to derive counterfactuals, saliency maps, and so called isofactuals using invertible neural networks. The quality of the generated explanations are compared visually with existing baselines.\\n\\n2. I found the structure of the paper confusing and lacking in clear elicitation of contributions. For example, once explanation methods are provided in Sec 2, the motivation of Section 3 to purely compare gradient methods is highly unstructured. In here, its unclear why changes in independent principal components while generating counterfactuals not desirable especially if as the authors suggest, the prediction changes. Even if the changes are not observable to humans. This is a highly unusual aspect of the counterfactuals where the counterfactuals shouldn't just explain large changes to logits but changes to predictions as well. \\n\\n3. The objective of each evaluation data-set and study should also be clearly outlined before proceeding to the details. Currently the paper leaves the reader to figure out the main contributions at the cost of hampering the paper's technical significance. \\n\\n4. What is the goal of the user study? Why is the baseline any other method of generating counterfactuals but merely conditioned examples? \\n\\nI strongly suggest restructuring the paper to fix above concerns and provide a clear justification for their experimental setup.\\n\\n5. I also strongly recommend the authors to move away from evaluating their models against celebA labels such as \\\"Attractive\\\" which are rife with ethical concerns. I understand that those are the only options available in celebA but I would recommend using more neutral class labels for experiments if face dataset is an important part of their evaluations.\\n\\n--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------\\nThe authors have done a reasonable job at addressing my concerns and I have increased my score from 5 to 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Relevant. Lacks clarity. Mildly convincing results.\", \"review\": \"In this paper, the authors propose a method for generating counterfactuals (visually \\u201csimilar\\u201d examples with different labels) and isofactuals (visually \\u201cdifferent\\u201d examples with the same label) using an invertible convolutional network. A human study shows that providing these counterfactual and isofactual images in a systematic way can help participants understand a model\\u2019s bias better.\\n\\nInterpretability of machine learning models is becoming progressively more important as these models continue to proliferate in sensitive applications such as medicine, finance, law, etc. A large body of interpretability efforts is around post hoc methods where an explanation is generated given certain probes into a blackbox function. On the other hand, the proposed model imposes certain constraints on the model to yield these explanations. The paper has some interesting ideas, and the qualitative evaluation is helpful is conveying those ideas. The discussion around gradient wrt the input image vs directional derivative is fairly insightful with convincing qualitative and quantitative results (Fig. 4). However, I did have some conerns:\\n\\n1. The writing often lacks clarity and the usage of space can be more judicious. For example, the description of the main method is severely lacking, and is lumped into a few short paragraphs (section 2). I needed to reread the section a few times to understand the gist since important details are either missing or relegated to the appendix. I am still unsure about the development of isosurface section. On the other hand, excessive details are present in the evaluation section, e.g. in-depth discussion of mice characteristics. Focusing on elaborating the method, and maybe one less dataset would improve readability by moving extra evaluation to appendix. Something like, \\u201cwe observe similar patterns with other tested datasets, which are presented in the appendix.\\u201d\\n\\n2. The results of the human subject study are not very convincing. While the subjects were able to better detect model\\u2019s biases with the systematic presentation using the proposed method, they also spuriously discovered irrelevant patterns (background and blocks). The appendix (Fig. 9) also shows that the subjects thought both the proposed method and the baseline were equally good. [Digression: Fig. 9 is poorly processed with missing words, repeated legend, shuffled axes, etc.).\\n\\n3. Minor formatting issues: mismatched quotes throughout (striped, zebra, etc.), Fig. 4 caption (there not ideal), \\u201cdifferent to the original\\u201d, etc.\\n\\nOverall, the paper has some good ideas and interesting analysis but falls short on clarity and fully convincing the reader about the results. I am marginally inclined for it to be accepted.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Use of invertible CNNs to construct counterfactuals and isosurfaces\", \"review\": \"This paper describes a computational method to construct ideal counterfactuals and isosurfaces via invertible CNNs, and uses it to reveal biases in three different datasets.\", \"strengths\": \"1. The use of directional derivative to construct ideal counterfactuals is interesting.\\n2. Leveraging PCA to construct isosurfaces is neat.\\n3. The human study is a plus, where the stimuli are based on counterfactual interpolations created by the proposed method.\", \"weaknesses\": \"1. The reviewer finds the manuscript hard to follow, especially Section II. The authors may come up with a clearer presentation.\\n2. The descriptions about saliency maps are less relevant to the main idea, further confounding the reviewer.\\n3. The comparison between simple gradient and direction derivative is less fair, as the directional derivative makes use of the very information direction w (e.g., the direction of no sunglass -> sunglass). What happens if we visualize $\\\\phi^{-1}(\\\\phi(x)+ \\\\alpha w)$ directly, for different values of $\\\\alpha$.\\n4. The human study may need to conduct another set of control experiments to show that only original training images (not counterfactual interpolations) are $\\\\textbf{less}$ helpful for uses to identify CNN patterns and biases. The reviewer conjectures that for this simple TWO2TWO data, the subjects may spot shortcuts easily even using original training images.\", \"other_minor_comments\": \"1. Figure 1: There is no explanation for (a). What is $w$? The reader may not understand it for the first reading.\\n2. Figure 4: The reviewer believes normalized scores on the top of the images make better sense.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting idea but the execution and writing left a lot to be desired, seems not proof-read!\", \"review\": \"Summary: The paper presents a promising idea to build interpretable models by combining discriminative and generative approach. The proposed model uses an invertible neural network to model the data distribution. The invertibility helps in transforming the learned feature vector back to the image domain. A linear discriminative classifier is trained on the feature vector to perform binary classification. Using the inverse function, the model generates a counterfactual explanation by inverting a modified logit score to create a new image as an explanation. The authors further construct an orthogonal basis using PCA, such that modifying feature vector in those directions results in no change in the classifier's prediction. Decomposing the feature space into such a basis helps discover potential biases in the dataset and the classification model. The experiments compare the proposed method's performance with fully discriminative models and post-hoc interpretability methods such as gradient-based saliency maps.\\n \\nMajor \\n-----------------\\n\\n\\u2022\\tThe paper an interesting and potentially important idea. But at times, the text is difficult to read. \\n\\u2022\\tThe conclusion from Figure 2 is not clear. From the caption and the figure, it is not clear which attributes such as smiling; gender are important for the classifier's positive /negative attractive decision. \\n\\u2022\\tIn Figure 2 and Figure 3(b), a label showing the different principal components considered in each column, and the logit/prediction of the classifier for each row will improve the figure's readability. \\n\\u2022\\tSaliency maps highlight important regions of an image for the prediction decision. The saliency maps in Figure 3(a) highlight almost the entire face; hence they are inconclusive. \\n\\u2022\\tFigure 4(a) shows the final results after integrating the original image along with different derivatives. To understand the results, it would be helpful to show some examples over which the integration took place. \\n\\u2022\\tAn example of a random sample, with minimal/no changes along the independent factors for the proposed method, as compared to positive changes by other methods, will help in understanding the results in Figure 4(b). \\n\\u2022\\tThe number reported at the end of section 3 on \\\"the gradient of x and the directional derivative dx/dw\\\" should be reported in a table to allow a proper comparison between different methods. \\n\\u2022\\tThe directional derivative dx/dw, in the model, is w.r.t the weight vector of the binary classifier, trained to identify the label. Related work by Kim et al. (2018), \\\"Interpretability beyond feature attribution: Quantitative testing with concept activation vectors\\\", also used directional derivatives w.r.t to binary classifiers, trained to identify a human-defined concept. A comparison with this method will help the reader understand the different applications of directional derivatives and how directional derivatives can be used without an invertible network. \\n\\u2022\\tTable 1 doesn't report the results for the supervised method for celebA and tow4two datasets. \\n\\u2022\\tThe numbers reported in Table 2 lacks a coherent conclusion. The corr. data and corr. change columns have values in similar ranges. Please elaborate and discuss the results. 
\\n\\u2022\\tIn Figure 5(b), it's not clear how real images for the baseline condition are sampled. To allow a proper comparison, as in the counterfactual design, three images should be selected as primary images, and the remaining images should be sampled based on their minimum distance to the primary images while having a different label. \\n\\u2022\\tIn the Evaluation section, for CelebA, the authors identified principal components that correspond to attributes like gender and smiling. The results are reported in a qualitative manner, with inferences drawn from only a few examples. A quantitative analysis is required to demonstrate the dependency of the \\\"attractive\\\" attribute on other attributes. The results shown in Figure 6 of the appendix don't have a thorough caption to illustrate the findings.\", \"minor\": \"------------\\n\\n\\u2022\\tThe caption for Figure 1 has typos. \\n\\u2022\\tIn Figure 4(a), the caption doesn't describe the top label. \\n\\u2022\\tThe opening quotation marks are inverted throughout the text. \\n\\u2022\\tTable 1 and Table 2 are shown after the references. They should be placed with the main text before the references. \\n\\u2022\\tThe label of Figure 5(b) has a very small font.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea needing more work\", \"review\": \"Update after revision\\n------------------------------\\nI thank the authors for their work on this paper. The second reading was more pleasant. I agree with the authors that performing a user-study is an important effort, that should be encouraged. I however still believe that, if not benefitial to the user, the complexity of the method can be a drawback. I also wished that more comparisons, but especially other data modalities were investigated. I have updated my rating to reflect the improvement in the text.\\n\\nShort summary\\n-----------------------\\nThe authors propose a technique based on an invertible network to provide counterfactuals relative to one class of interest. The counterfactuals can be interpolated across an isosurface, displaying parameters which do not affect the model\\u2019s decision. The authors propose an attribution map based on those counterfactuals and evaluate counterfactuals in a qualitative manner, based on their own observations on 3 datasets, as well as based on a human-grounded evaluation on a synthetic dataset. \\n\\nStrengths\\n---------------\\nThe use of an invertible dataset is rather novel in the field of explainability, and the relationship between the obtained counterfactuals and gradient-based interpolation methods is interesting. The human-grounded evaluation is definitely a large undertaking that is not often performed to assess the usefulness of interpretability techniques.\\n\\nWeaknesses\\n-------------------\", \"i_have_identified_several_weaknesses_of_the_work_that_justify_my_recommendation\": \"- the (lack of) clarity of the text.\\n- the assessment of the technique, as the results of the human-grounded evaluation are mixed, with users not being significantly more accurate in finding confounding factors compared to a baseline technique.\\n- the limitations of the technique, not discussed in depth. For instance, I can see difficulties in evaluating the effect of classes that are not present as \\u201ctraining classes\\u201d in the dataset, which requires a large labeling effort. In addition, how the technique would transpose to non-image datasets, or whether there are limitations in the invertible architectures to consider should be mentioned.\\n\\nNovelty\\n-----------\\nThe \\u201cRelated works\\u201d section is rather limited, which makes it difficult to evaluate. In general, the use of invertible networks as interpretable networks is novel.\\n\\nClarity\\n---------\", \"clarity_was_a_major_weakness_of_this_work_for_me\": [\"the datasets are illustrated in figures but not mentioned until much later\", \"the maths are described in sections that seem unrelated to each other, without depicting the relationships between the different steps\", \"multiple concepts are unclear (see detailed comments)\", \"the motivations are not clearly explained\", \"Rigor\", \"--------\", \"I found the qualitative evaluation on the 3 datasets unconvincing, as it is unclear whether the same conclusions could not have been reached using other techniques.\", \"While I was most interested by the discussion around the generation of counterfactuals based on the invertible network compared to based on the integration of gradients, I wished there was a definition of an \\u201cideal\\u201d counterfactual, qualitative or (preferably) quantitative. 
The single example provided in the main text is appealing, but requires more evidence in my view.\", \"Finally, the \\u201csaliency\\u201d maps defined in this work do not seem to be used later on in the work. I doubt that looking at them would improve human evaluation of a model\\u2019s behavior.\", \"Detailed comments\", \"-----------------------------\", \"Counterfactuals: their quality seems subject to appreciation and confirmation bias, especially on potentially cherry-picked examples. To assess their quality, I would suggest using the BAM dataset (Yang and Kim, 2019, https://github.com/google-research-datasets/bam) which was generated to benchmark attribution methods. I would overall suggest the use of this dataset for assessing the faithfulness (sensitivity, specificity) of the proposed approach.\", \"The choice of the mice dataset should be justified as this doesn\\u2019t seem like an obvious choice to assess the quality of attribution techniques. It is quite difficult to estimate any effect, and it feels like the qualitative evaluation is biased by the authors\\u2019 remarks, given the reader's lack of knowledge of the problem.\", \"There should be more details about the Two4Two dataset and its motivations, as well as how it relates to other datasets (e.g. Goyal et al., 2019)\", \"How does the proposed approach relate to \\u201ccompleteness\\u201d (Sundararajan et al., 2017)?\", \"What is the mathematical justification for resizing the saliency map of an intermediate layer to the input resolution? Is there a citation for this process showing that this is a reasonable assumption?\", \"I am confused by the section on saliency maps: what does h represent? The activations at an intermediate layer? The motivation is unclear: what are the authors trying to highlight in these \\u201csaliency maps\\u201d? Are these computed attributions or are these L1 distances between activations (in %) of x and x_tilde? Or is it a cosine distance (as suggested by the next sentence mentioning the angle)?\", \"The tasks used for illustration are not described in the text. Examples of y and epsilon should be provided.\", \"Is the technique limited to the model\\u2019s predicted classes?\", \"How is an \\u201cideal\\u201d counterfactual defined and mathematically verified?\", \"The relationship between counterfactuals and e.g. integrated gradients is unclear: the former clearly needs a model that can generate data, while the latter integrates the gradients between a baseline (defined by the user) and the input. More details and explanations are required to make this relationship clearer.\", \"What are the participants in the human-based study viewing? Are they comparing the counterfactuals to e.g. SmoothGrad maps, or the saliency maps as defined by the proposed approach?\", \"It is unclear what the participants answered: Figure 5a mentions that the main score is \\u201cstrongly disagree\\u201d for \\u201carms\\u201d (both baseline and interpolation) while the text refers to \\u201cstrongly agree\\u201d. Example questions would help.\", \"The results of the human-grounded study are not very conclusive. Note: please correct for multiple comparisons due to multiple statistical testing of the same effect.\", \"Kim et al. (2018) already showed that human users perform poorly at identifying a network\\u2019s decision behavior based on saliency maps. A better comparison could have relied on TCAV instead, especially as the concepts can easily be mapped to the features given the synthetic dataset.
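For concreteness, a minimal sketch of the kind of TCAV-style comparison I have in mind (hypothetical toy code with random placeholder data, not the authors' implementation):\\n\\n```python\\nimport numpy as np\\n\\n# toy stand-ins for intermediate-layer activations of concept vs. random\\n# examples, and per-input gradients of the class logit w.r.t. activations\\nrng = np.random.default_rng(0)\\nacts_concept = rng.normal(1.0, 1.0, size=(50, 8))\\nacts_random = rng.normal(0.0, 1.0, size=(50, 8))\\ngrad_logit = rng.normal(0.2, 1.0, size=(100, 8))\\n\\n# 1) fit a linear probe separating concept from random activations;\\n#    its normalised weight vector is the concept activation vector (CAV)\\nX = np.vstack([acts_concept, acts_random])\\ny = np.hstack([np.ones(50), np.zeros(50)])\\nw, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)\\ncav = w[:-1] / np.linalg.norm(w[:-1])\\n\\n# 2) TCAV score: fraction of inputs whose class logit increases along the\\n#    CAV, i.e. whose directional derivative grad_logit . cav is positive\\ntcav_score = float(np.mean(grad_logit @ cav > 0))\\nprint(f\\\"TCAV score: {tcav_score:.2f}\\\")\\n```\\n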
Such a comparison could have made a stronger case for the use of invertible networks, especially as Goyal et al. (2019) mention the use of counterfactuals based on concepts.\", \"How about non-image datasets?\", \"Minor\", \"-------\", \"Intro: I would suggest using \\u201ctransparency\\u201d rather than \\u201cinterpretability\\u201d when referring to logistic regression (e.g. Lipton, 2016). The interpretability of linear model weights is indeed debatable, as weights will depend on the regularization and signal-to-noise ratio in the data (Haufe et al., 2014).\", \"No clear flow between the different works in the intro. No clear motivation behind counterfactuals.\", \"proofreading: the paper is quite hard to follow, and minor grammar issues (e.g. \\u201cTheir similarity is easy to seen\\u201d) make it more difficult to assess. The quality of the writing deteriorates in sections 3, 4 and 5.\", \"It is unclear what scale delta epsilon represents, and whether we can expect the norm of the different techniques to be comparable.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Ao2-JgYxuQf | Active Tuning | [
"Sebastian Otte",
"Matthias Karlbauer",
"Martin V. Butz"
] | We introduce Active Tuning, a novel paradigm for optimizing the internal dynamics of recurrent neural networks (RNNs) on the fly. In contrast to the conventional sequence-to-sequence mapping scheme, Active Tuning decouples the RNN's recurrent neural activities from the input stream, using the unfolding temporal gradient signal to tune the internal dynamics into the data stream. As a consequence, the model output depends only on its internal hidden dynamics and the closed-loop feedback of its own predictions; its hidden state is continuously adapted by means of the temporal gradient resulting from backpropagating the discrepancy between the signal observations and the model outputs through time. In this way, Active Tuning infers the signal actively but indirectly based on the originally learned temporal patterns, fitting the most plausible hidden state sequence into the observations. We demonstrate the effectiveness of Active Tuning on several time series prediction benchmarks, including multiple super-imposed sine waves, a chaotic double pendulum, and spatiotemporal wave dynamics. Active Tuning consistently improves the robustness, accuracy, and generalization abilities of all evaluated models. Moreover, networks trained for signal prediction and denoising can be successfully applied to a much larger range of noise conditions with the help of Active Tuning. Thus, given a capable time series predictor, Active Tuning enhances its online signal filtering, denoising, and reconstruction abilities without the need for additional training. | [
"Signal Filtering",
"Recurrent Neural Network",
"Time Series",
"Denoising",
"Temporal Gradients"
] | Reject | https://openreview.net/pdf?id=Ao2-JgYxuQf | https://openreview.net/forum?id=Ao2-JgYxuQf | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"uGyAa9aqSv5",
"NEYZ57JXPTw",
"Fhwy3LHaFZQ",
"u5a-1EUfxFJ",
"zqwd9ItiONM",
"bSSV8bTbu-n",
"7t7g6Xk4Og_",
"fhawELqBPqq",
"Urp6dKu-zXu"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040427103,
1606167108391,
1605716748427,
1605716619772,
1605716548407,
1605716438348,
1603924140900,
1603921494837,
1603899116397
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3735/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3735/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3735/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3735/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3735/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3735/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3735/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3735/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper introduces a method to estimate dynamics parameters in recurrent structured models during the learning process. All three reviewers agreed that the idea is interesting and the proposed method could be potentially useful. However, two of the three reviewers have a serious concern about the lack of comparison with other approaches. I agree with these two reviewers; due to the lack of discussion and comparison with existing studies, I cannot recommend accepting this submission in its current form.\"}",
"{\"title\": \"On Revision 23 Nov 2020\", \"comment\": \"To complete the comparison with TCN and missing data values, we have just uploaded another paper revision.\\n\\nThis revision includes now TCN results for all three problem domains considered. While TCN partially outperforms the respective RNNs when trained on similar denoising levels, Active Tuning applied to a noise-unaware RNN outperforms the noise-unaware TCN. In the conclusions we now also mention the potential to add Active Tuning to TCNs and other feed-forward ANNs. \\n\\nAdditionally, we also added the dropout experimental results to the pendulum data.\\n\\nMoreover, for the wave experiments, we added another illustrative visualization of the superior performance of Active Tuning in the DISTANA network.\\n\\nIn the light of these new result additions, we have also adapted the conclusions slightly further. \\n\\nThank you to all three reviewers for taking the time to reconsider our paper and your consideration and time in general. \\n\\nSincerely yours,\\nthe authors.\"}",
"{\"title\": \"Response to Reviewer 3 (4)\", \"comment\": [\"Dear Reviewer 3,\", \"Thank you very much for your positive feedback and your recommendations. Please note that we applied DISTANA only for predicting the Wave Dynamics. In the other two problem cases, standard LSTMs were applied, underlining the general applicability of Active Tuning.\", \"Yes, we clearly see your demand for an algorithmic description as justified. We added Algorithm 1 including some further explanations in the text and hope this complements the method section satisfactorily.\", \"We are not entirely sure about what you mean with bias concerning the data generation. But, of course, we tried to produced datasets that were as unbiased as possible, randomizing the parameters of the generation process accordingly.\", \"We actually did vary the tuning length and tuning cycles across the three different problem domains and noise levels (albeit not very much). The particular choices can be found in Table 6, 7, and 8 in Section A.2 of the appendix.\"]}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": [\"Dear Reviewer 2\", \"Thank you for your constructive feedback on our work.\", \"By including an algorithmic explanation of our method (see Algorithm 1 in the revised paper), we hope to both increase the comprehensibility of our paradigm's description and remove any uncertainties on implementation details.\", \"Your statement about the potential of Active Tuning outperforming other models raised our curiosity. Hence, we incorporated an experiment with a state-of-the-art sequence-to-sequence model, namely, a temporal convolution network (TCN). Indeed, as reported in the modified Table 4 (and Table 5 in the appendix) of our revised paper, we can verify your expectations and have consistently observed that the TCN is outperformed by Active Tuning. However, we would like to again highlight the fact that we did not aim at beating the most sophisticated state-of-the-art ANN in particular domains. Rather, Active Tuning has the potential to generally enhance the performance of prediction models, and particularly recurrent temporal prediction models, without further training when facing missing values and noisy data.\", \"\\\"Same comparisons might be interesting for tuning weights instead of hidden states.\\\". This is clearly a very interesting idea, which we hope to elaborate on in future work. What we investigated so far is tuning the cell states, the \\\"hidden states\\\" (unit outputs), and even the signal itself. What we found is that it did not really make a huge difference, which of the mentioned parts are optimized. Typically, tuning just the hidden units outputs works best, but only with a small margin (at least for LSTMs). When tuning the weights catastrophic forgetting might become an issue, which Active Tuning fully avoids, because the model parameters (i.e. the ANN weights) are not modified.\"]}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": [\"Dear Reviewer 1,\", \"Thank you very much for your review and your suggestions.\", \"It is very important to emphasize that within this paper Active Tuning is not applied during or for training (it is nonetheless a good idea to incorporate it during training). Instead, the hidden states of pre-trained models are optimized based on the prediction error-induced gradient information. The outputs of the RNNs are thus only driven by the continuously applied gradient-based tuning of the hidden states. Sorry, if we did not make this clear enough. We hope that with inclusion of Algorithm 1 this becomes more comprehensible.\", \"To the best of our knowledge, there are no comparable approaches to exclusively tune the RNN's hidden states for inference purposes. The main statement that we tried to make with our paper is, however, that Active Tuning can dramatically improve an already existing model (without additional learning) and unfold robustness properties that have not been addressed during training. In order to further interpret the potential of the method, we incorporated results of temporal convolution networks (see updated Table 4 in subsection 4.3. as well as the new Table 5 in the appendix).\", \"We appreciate and agree on your point that we exclusively focused on noise filtering and noise robustness. To underline the potential of Active Tuning further, we now performed an additional experiment to demonstrate superior robustness to missing values in time series data when using Active Tuning (see Table 2 of the revised paper).\", \"Indeed, the potential computational overhead of using Active Tuning can not be neglected. Yet, this overhead, caused by a gradient-based mini optimization procedure within every global time step, scales with the number of tuning cycles C and tuning horizon R (both essentially depend on the problem, as can be seen in the Tables 6, 7 and 8 of the appendix). We are currently working on reducing the required numbers of C and R, which would significantly reduce the computational overhead. Nevertheless, since we admit the importance of this aspect, we have added a corresponding amendment to Section 2.\"]}",
"{\"title\": \"General response to the reviews and list of major modifications\", \"comment\": \"Thank you to all three reviewers for insightful comments, the criticism, and the suggestions. We tried to address the mentioned requests and suggestions and hope all of you will find the paper even more appealing now.\\n\\nBesides some general minor text reformulation and cosmetics, the major additions in the uploaded revision are as follows:\\n\\n- A formal algorithmic description (Algorithm 1) with additional explanations.\\n- Another evaluation that demonstrates that Active Tuning can also handle missing data (Table 2).\\n- We added a comparison to the performance of a temporal convolution network (TCN) on the spatiotemporal wave dynamics benchmarks (updated Table 4 and new Table 5 in the appendix, TCN setup cf. end of Section 3; results discussed in subsection 4.3 Wave Results).\\n\\nWe thus hope that we can convince also Reviewer 1 and Reviewer 2 to rate our paper above threshold after all (currently rating level 5 and 3, respectively; Reviewer 3 gave rating level 8).\"}",
"{\"title\": \"paper is well-written and clear. There are no related work discussed.\", \"review\": \"This paper introduces a propagation method to estimate RNN dynamic parameters during the learning process. The algorithm is introduced well and the paper is clearly written.\\nThe paper misses a related work section on other tuning methods or absence there of under special circumstances. \\nFor the same reason, I am not convinced on the extent of comparisons in the simulation results. A large amount of the focus of the experiments is on the robustness to noise which is fine if there was an equal amount of comparisons against other tuning methods. Otherwise, if the focus of the paper is supposed to be only on noise robustness, I think the motivation in abstract and introduction needs to be clearer.\\nWhile the motivation of the paper can be to some extent taken from the results, the introduction does not substantially motivate the problem. \\nLastly, I think that majority of details on pages 4 and 5 are unnecessary. Instead, I think a more detailed discussion on comparing additional computational cost of active tuning to other traditional methods would be very useful.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A novel method to tune autoregressive model via hidden state optimization\", \"review\": \"Paper proposes a way to adapt an autoregressive model (RNN in examples) to the incoming noisy signal to generate noise-free data output. The approach is interesting due to applying updates to the hidden state of the past observation. The proposed approached is named Active Tuning and evaluated on 3 toy tasks. The idea sounds interesting, however the lack of comparisons with other approaches and theoretical justification of why this approach is superior makes it hard to convince reader.\", \"quality\": \"Paper is well written and most of the concepts is clear. However, paper will benefit from a better explanation of the method, simpler diagram and equations to remove uncertainty on implementation.\", \"originality\": \"I believe the idea is novel and interesting for community. It has a potential to outperform meta-learning and sequence-to-sequence models on the task of model adaptation to noisy samples.\", \"pros\": [\"Idea is interesting and has potential.\", \"Explanation is clear, but still can be improved.\", \"Provided experiments show benefits of the proposed method with respect to direct regression task (same model trained with less or more noise amount)\"], \"cons\": [\"Comparison with other techniques such as meta-learning, sequence-to-sequence models is required to understand the potential of the method.\", \"Same comparisons might be interesting for tuning weights instead of hidden states. Or having only a small part of the model to be tuned (like the last layer).\", \"Application to more practical problems could benefit the paper. For example image denosing task could be relevant (works like Noise2Noise, Noise2Self etc)\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting approach in optimizing the internal dynamics of recurrent neural networks\", \"review\": \"This is an interesting paper on an idea introduced by the authors as active tuning. This paper is well-written and clearly explains the proposed active tuning scheme. I read the paper carefully multiple times, and feel that a few inclusions will help the readers better understand the proposed method.\\n\\nAt the base level, this paper builds on optimizing the internal dynamics of a recurrent neural network unlike optimizing internal weights in traditional sequence-to-sequence mapping. This is achieved by decoupling the recurrent neural activities from the input temporal signal and propagating the error (the difference between the estimated input value and the observed input value of the input signal) to tune the internal dynamics of the network. To demonstrate the effectiveness of active tuning the authors trained a distributed graph recurrent neural network (DISTANA) on three datasets with increasing complexity. Datasets included: multiple super-imposed sine waves, a chaotic double pendulum, and spatiotemporal wave dynamics. On average ten independent experiments were performed and the effectiveness of active tuning was evaluated using root mean square (RMS). Samples for the experiments were generated using five different noise ratios between 0 and 1 to measure the effectiveness of the proposed method for noisy data scenarios. The network was also trained on no noise to 0.05 noise induced into training data to see if it would help the models better generalize. The results as depicted in graphs show that active tuning is not only robust but generalizes well on noisy data.\", \"recommendations\": \"1. The active Tuning algorithm itself is missing from this paper. Even though the explanation is clear, it would help the readers to see the algorithm itself for better understanding. The reviewer referred to Hidden Latent State Inference in a Spatio-Temporal Generative by karlbauer et. al. 2020 (arXiv:2009.09823) for the algorithm. \\n\\n2. The authors confirm that 10000 and 1000 samples have been generated for all the problem domains tested. However, it is not clear if steps were in place to make sure that no bias was introduced during this sample generation. \\n\\n3. While the tuning length and tuning cycles were fixed for all three datasets, it is important to see how these values can be optimized based on the complexity of the time series data. Experimental results using a range of values for tunning length and tuning cycles would be beneficial.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
BIIwfP55pp | PERIL: Probabilistic Embeddings for hybrid Meta-Reinforcement and Imitation Learning | [
"Alvaro Prat",
"Edward Johns"
] | Imitation learning is a natural way for a human to describe a task to an agent, and it can be combined with reinforcement learning to enable the agent to solve that task through exploration. However, traditional methods which combine imitation learning and reinforcement learning require a very large amount of interaction data to learn each new task, even when bootstrapping from a demonstration. One solution to this is to use meta reinforcement learning (meta-RL) to enable an agent to quickly adapt to new tasks at test time. In this work, we introduce a new method to combine imitation learning with meta reinforcement learning, Probabilistic Embeddings for hybrid meta-Reinforcement and Imitation Learning (PERIL). Dual inference strategies allow PERIL to precondition exploration policies on demonstrations, which greatly improves adaptation rates in unseen tasks. In contrast to pure imitation learning, our approach is capable of exploring beyond the demonstration, making it robust to task alterations and uncertainties. By exploiting the flexibility of meta-RL, we show how PERIL is capable of interpolating from within previously learnt dynamics to adapt to unseen tasks, as well as unseen task families, within a set of meta-RL benchmarks under sparse rewards. | [
"Meta-learning",
"Imitation Learning",
"Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=BIIwfP55pp | https://openreview.net/forum?id=BIIwfP55pp | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"cytEEyIkqxM",
"OiOCvxXNRJj",
"rqbleA3Cpy",
"9g20BcE2LB8",
"gVaIZMvRgLt",
"NWmRMUyvkut",
"6PGapvU3L7",
"Fr8u3lcaNA",
"72J1jwumBL_"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512305,
1606171748422,
1606165222736,
1606163567063,
1606163000371,
1604677765638,
1604115224792,
1604106266000,
1603868720045
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3734/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3734/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3734/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3734/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3734/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3734/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3734/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3734/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper proposes a method that combines imitation learning and meta-learning, which aims to be able to explore beyond the provided demonstrations.\\n\\nWhile the paper addresses an important topic, and the authors are commended on a productive conversations, there is a consensus among the reviews that the work is not yet ready for publication. The future manuscript should address: reexamine the assumptions and improve presentation.\"}",
"{\"title\": \"Thank you for your inquiry about the proposed method and the experimental results. Our replies to the questions are listed below.\", \"comment\": \"Dear reviewer,\\n\\nWe thank you for your valuable feedback and would like to address the following points.\", \"q\": \"This paper claims that PERIL is capable of exploring beyond demonstrations, but the tasks that this paper evaluates on don\\u2019t seem to require much sophisticated exploration. Substantiating these claims seems to require evaluation on tasks requiring more exploration.\", \"a\": \"We will relax this claim as we intend to raise that, even when the single demonstration is not sufficient to discern the task, the agent is still capable of exploring further to adapt to that unseen task.\\n\\nWe will also like to thank you for your comments which indicate how to improve our related works sections. \\n\\nWe are delighted to hear that, given further clarifications, you would recommend a higher score. Given the reviews from other reviewers, alongside your review, we have acknowledged that we did not have time to make all the necessary changes which would satisfy all of the reviewers. For that matter, we will use all of this feedback to polish our work further and submit again in the following term. \\n\\nThank you again and I hope that you find our clarifications and our decision regarding delaying PERIL\\u2019s submission appropriate.\", \"c\": \"There are quite a few undefined loss functions in line 12 of Algorithm 1\\u2026\"}",
"{\"title\": \"Thank you for your inquiry about the proposed method and the experimental results. Our replies to the questions are listed below.\", \"comment\": \"Dear reviewer,\\n\\nWe would like to thank you for your detailed feedback. We see this as an opportunity to clarify our results and our methodology, as well as chance to present further studies which may concretely point out the contributions of our approach. We are happy to hear that, if well substantiated, you believe we could be delivering some very significant work. In light of this we are aware of the changes required to polish our paper and we have thus decided to postpone our submission and use your feedback, along with that from the other reviewers, accordingly. \\n\\nWe would like to thank you for your comments, and will pay particular attention to:\\n- Improving our review on meta-RL and meta-IL methods\\n- Clarifying the differences with PEARL\\n- Clarifying how we include imperfect demonstrations into the demonstration buffers (which theoretically make IL more robust to high-entropy demonstrations).\\n- Include confidence intervals for the experiments\\n\\nWe would like to clarify the following issues.\", \"q\": \"\\\"4.2: It is very unclear what the authors mean by \\\"zero-shot\\\" learning. By my estimation, this method always requires some samples of the target environment to attain the presented performance, making it squarely a few-shot domain.\\\"\", \"a\": \"It is unfortunate that we did not clarify this enough. We believe this is our strongest result (and contribution): by giving PERIL a single demonstration of an arbitrary task from within i.e. the 4 2D task families, it can perform that task successfully without any adaptation stages (In contrast to many other methods).\", \"c\": \"\\\"3.3: The included equations don't seem to add much to the paper's story and seem to recapitulate well-known results from RL, variational inference literature, or cited work.\\\"\"}",
"{\"title\": \"Thank you for your inquiry about the proposed method and the experimental results. Our replies to the questions are listed below.\", \"comment\": \"We have acknowledged that we require further experimental validation via additional baselines, in addition to clarifying and identifying the contributions of each component in PERIL. For that matter, we have decided to use your feedback amongst that from the other reviewers and postpone our submission.\\nIn no lesser extent, we are grateful for you valuable feedback which we will most definitely use to shape our new submission. We thank you for your comments which you have raised and hope that you find our answers (see below) and our decision appropriate.\", \"we_will_look_into\": \"- Changing the historical introduction to Meta-Learning, introducing further studies regarding meta-IL (we accidentally used the wrong citation for \\u201cbottom of page 2\\u201d), and further accrediting other authors with contributions to the field\\n-Clarifying the problem formulation further (VAE, ELBO)\\n-Clarifying why $L_{BC}$ is not included in our objective (reasons are as pointed out)\\n-Adding other baselines which clarify contributions of each component of PERIL\\n\\nWe also wanted to answer and/or ask you some questions about some of your concerns which we did not quite understand.\", \"q\": \"Section 4.2 Table 1, how is 0 possible?\", \"a\": \"Table 1. 0 is possible because it takes 0 adaptation rollouts (which we will now call $K_\\\\tau$ as suggested by Reviewer 5) to succeed at performing this task. These values are computed by finding at which rollout $K_\\\\tau$ the agent successfully completes the task, given that the next 2 attempts will also be successful. We are aware that this was not clarified given the 8-page constraints but we will attempt to clarify this.\"}",
"{\"title\": \"We thank reviewer 5 for their constructive feedback and suggestions.\", \"comment\": \"Dear reviewer,\\n\\nWe deeply appreciate the effort you have put into reviewing our paper. We take your feedback as an opportunity to further develop evidence which supports our proposed method. Given the time constraints during the rebuttal period, we have decided to use your feedback to present a more thorough submission of PERIL in the future. In particular, we thank you for:\\n\\n- Your comments on how to problem formulation: we will stick to option A and will have Key2D as an example of a system which requires demonstrations to rapidly condition exploration. \\n- Your views on how to clarify the problem by adding $K_d$ demonstration and $K_\\\\tau$ rollout terms.\\n- Your indications on which baselines we could add to further understand the contributions of the different components in PERIL.\\n\\nWe also thank you for other comments which contribute to our understanding to which areas (especially in our methods section) require further clarification.\"}",
"{\"title\": \"review\", \"review\": \"Summary:\\n\\nThis paper introduces PERIL, a meta RL method that combines demonstration trajectories and trajectories collected by the policy, in order to adapt to a new task. To this end, the authors combine ideas from metaRL (specifically from PEARL (Rakelly et al. 2019) and Humplik et al (2019)) where a set encoder is used to encode trajectories to a latent vector describing the task, with imitation learning techniques by (a) training this encoder also with demonstrations (b) initialising the latent vector at test time by feeding demonstrations through the encoder, and (c) having additional losses inspired by metaIL techniques. The motivation is that using demonstrations allows us to learn tasks that are difficult otherwise, for example because the rewards (at test time) are sparse.\", \"overall_impression\": \"I like the idea of using demonstrations for metaRL when tasks are sparse. Many metaRL methods do not work well in sparse reward tasks, and using expert demonstrations is a nice way of guiding the agent towards behaviour that can solve the task. Empirically, the proposed method PERIL outperforms the baselines PEARL and MetaIL, so that is promising. The authors provide analysis of the latent space which nicely illustrates what the method has learned. However, PERIL is quite complex since it consists of many different parts and loss terms (six if I counted correctly), and it needs demonstrations + interactions + (sparse) reward signals at test time. I found it hard to keep track of everything and make sense of how these parts fit together. From both the text and the empirical results, it is not clear to me why all the parts are necessary / what they do, and I am left wondering if a simpler approach would work as well. The notation and mathematical formulation in the paper is not polished enough (there are inconsistencies, variable name clashes, some parts of the objective function not properly introduced and explained) which added to my confusion. Therefore, even though the idea seems promising, I think the paper is not quite ready for publication.\", \"questions\": [\"In the introduction you say that MetaIL methods have the drawback that \\\"after adaptation, they cannot continue to improve the policy in the way that RL methods can\\\". You say that you method PERIL \\\"allows for continual policy improvement through further exploration of the task\\\". I have a few questions about this.\", \"Since only the latent embedding is updated, doesn't PERIL also suffer from the fact that the policy cannot be improved in the way that RL methods can (but instead, all adaptation is that within the limits of task inference)?\", \"Why is additional exploration at test time even necessary, if we have expert rollouts and the policy itself isn't actually updated (the only thing that's adapted at test time is the latent embedding)? If all the demos + trajectories are used for is task inference, then shouldn't the expert demos always be sufficient?\", \"You motivate your approach by saying that at test time, it is useful if the expert does not have to provide a shaped reward. However, you do make use of a shaped reward during training - this is a limitation that should be discussed in the paper. In addition, you still need (sparse) rewards at test time. Are those really necessary, given that you have a demonstration of the task? Did you test PERIL without those sparse reward inputs to the encoder?\", \"Table 1, how was the agent trained? 
Was it with a number of adaptation trajectories k>0? If so, what happens if you train the agent with k=0? On the other hand, can you get good zero-shot adaptation performance by just increasing the number of demonstrations?\", \"You say $d_\\lambda$ is a VAE, but if I understand your setup correctly then $d_\\lambda$ is only the decoder of a VAE, right? And Eq 8 is the reconstruction loss? Which also means it's not technically a VAE, because it encodes and decodes different things (encodes trajectories, decodes task descriptors - Humplik et al. 2019 describe this as an information bottleneck). Shouldn't there also be a KL term somewhere here?\", \"Where is $L_{bc}$ used? It's not part of Eq (7), but I also can't find it anywhere else except in Algo 1 and Fig 2. And what about $L_{mi}$? It's only in Algo 1 but nowhere else. Fig 2 has $L_{KL}$; where is that from? It would really help me piece everything together if there were one single equation somewhere that includes all loss terms. For each loss term, I as the reader want to clearly understand where it comes from, and why it is necessary (see suggestions for additional baselines/ablations below).\", \"Suggestions / Feedback:\", \"The problem formulation and the proposed solution don't match. In your problem setting you say you're in a general POMDP where the true state may be partially observed, but in your algorithm you rely on the fact that it's a POMDP only w.r.t. the task (i.e., reward and transition function) and *not* w.r.t. the environment state. That's an important difference! To explain that in more detail: in the introduction and problem setting you say $z$ models the true underlying state $s$ which can change at any moment: your transition function is $p(o',z'|o,a,z)$ where $o$ are observations. However, the entire formulation in your algorithm relies on the fact that $z$ does _not_ change over time, but instead describes a fixed task. That's also what PEARL does, which is what your formulation is based on. I think there are two ways to resolve this: (A) Either change the problem setting such that $z$ is fixed throughout time and define the transition function as $p(s'|s,a,z)$ where the environment state $s$ is now fully observable, or (B) change the algorithm to actually model a belief over a latent $z$ that can change over time. Option (A) is probably an easier fix, but then you might also have to change some of the environments (if I understand correctly, in Key2D the state of the handle is unobserved and can change the unobserved environment state).\", \"Section 3.1, I would add explicitly what the objective of the policy is (both in writing and in a mathematical expression). You aim to maximise the return of a policy that conditions on $K_d$ demonstrations, and which has interacted with the environment for $K_r$ rollouts (changing the notation here to make the distinction clear). From there it is easy for the reader to see what happens if you set $K_d=0$ (you get something more similar to PEARL), and what happens when you set $K_r=0$ (which is the zero-shot case). It's good to contrast this for the reader, and explain / show empirically why and when $K_d>0$ and $K_r>0$ are necessary. (See my comment on baselines below.)\", \"To understand PERIL better, I would suggest adding a few baselines.\", \"PEARL with a pre-initialised buffer that contains the demonstrations. The encoder and policy will be trained as normal, but there's additional data coming from the buffer that contains expert trajectories.
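For concreteness, a minimal sketch of the kind of buffer seeding I mean (hypothetical placeholder code with toy data, not PEARL's actual API):\\n\\n```python\\nimport collections, random\\n\\n# toy transition container and replay buffer (placeholder names)\\nTransition = collections.namedtuple(\\\"Transition\\\", \\\"obs act rew next_obs done\\\")\\nbuffer = collections.deque(maxlen=100_000)\\n\\n# seed the buffer with expert transitions before meta-training starts;\\n# here a stand-in for real demonstration rollouts of one training task\\ndemos = [Transition(obs=(0.0,), act=(0.1,), rew=1.0, next_obs=(0.1,), done=False)\\n         for _ in range(10)]\\nbuffer.extend(demos)\\n\\n# the agent's own rollouts are appended during meta-training as usual,\\n# and mixed batches are sampled for the off-policy (SAC-style) updates\\nbatch = random.sample(list(buffer), k=4)\\n```\\n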
Since PEARL uses an off-policy algorithm, it is possible to train the policy with this data. I think this is an important comparison, because it's a very simple way to incorporate demonstrations into PEARL and it would be good to understand if/when/why this works/doesn't work.\", \"In addition to the above, use the demonstrations at test time to initialise the context in PEARL. This is very close to the setting in PERIL, except that some parts of the objective function are missing ($L_{info}$, $L_{aux}$, $L_{bc}$, $L_{mi}$ - I think). This would give insight into whether those additional losses are truly necessary (currently you only have ablations on $L_{aux}$).\", \"Zero-Shot PERIL. There is some analysis of this in Table 1, but I think it would still be helpful to add this baseline. Does it work well for within-task-distribution adaptation (Fig 4) and not so well for settings that require more generalisation (Fig 5)? If we just throw in more demonstrations, is that sufficient to do zero-shot adaptation, or do we really need the policy rollouts? I think this is a central question that should be very clearly answered in the paper. Table 1 is a good start but this analysis can be expanded.\", \"Humplik et al. (2019), but with additional demonstration data to train the encoder/decoder. Again, this is the simplest way to incorporate the demonstration data into this method without explicitly making use of it at test time. This comparison would tell us something about why the demonstrations are necessary - are they necessary during training but not at test time, or the other way around, or are they necessary both during training and testing?\", \"Not sure I got everything; there are still $L_{bc}$ and $L_{mi}$, for which I'm not entirely sure where they come from and whether they are necessary. But basically, I think it's really important to analyse which parts are necessary - and make the method as simple as possible if you find some parts are not necessary.\", \"Smaller comments (didn't influence my score):\", \"There's a clash between using the variable k/K for the demonstrations (e.g., Sec 3.1 \\\"primal inference\\\", Sec 4.1 first sentence, Fig 7), and for the number of policy rollouts (e.g., algorithm 1 line 5, Table 1). This is confusing, so I strongly suggest using two distinct variable names (something like $k_d$ and $k_r$ also works).\", \"Similarly, it would help if you used two separate notations for the trajectories $\\\\tau$ that come from the demonstrations, and the ones that come from the policy. Throughout Section 3 I don't always understand which one of the two you are talking about.\", \"Your references need fixing. Some of them are without a year, and some say technical report even though they were published at a conference (e.g. Finn et al., Rakelly et al.). Your \\\"Wang\\\" reference for RL2 also seems wrong (first sentence in related work); it should be Jane Wang et al., \\\"Learning to reinforcement learn\\\". The way I always get my bibtex entries is via scholar.google.com: search for the paper there; click on the \\\"cite\\\" button under it and then \\\"BibTeX\\\" (Double-check whether the paper was published somewhere, though; Google Scholar often only puts the arXiv link even when the paper was actually published somewhere.
Sometimes the authors also put the correct bibtex entry on their homepage/GitHub with the code).\", \"Fig 1 left, there's a typo: \\\"learn to lean\\\" -> \\\"learn to learn\\\"\", \"For your experiments, I would call PERIL-A only \\\"PERIL\\\" (since this is your full method, including all losses), and then name the *ablations* differently, for example PERIL-noAux when you remove the auxiliary loss.\", \"All figures should have some form of indication of the error/std/confidence interval (using shaded regions around the mean, for example).\", \"Sec 4.2, explain what the multi-task family setting is and why it is challenging.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Writing is not clear and needs some work, Connection to prior work needs extra attention, An additional baseline is needed\", \"review\": [\"## Summary of work\", \"This work proposes PERIL, a method for combined Meta Imitation Learning and Meta Reinforcement Learning using context-based meta-learning. Given a set of demonstrations, a latent variable representing the desired task is inferred, and trajectories are generated conditioned on the inferred latent variable. The data from the expert demonstrations and trajectories are used for meta-learning updates.\", \"## Review\", \"Key comments below are included in bold text.\", \"Related Work\", \"I don't agree that meta-learning was conceptualized as an RNN-based task. There are many variants of meta-learning, RNN-based, MAML (optimization based), Encoder-based (Neural Processes), etc.\", \"It seems that \\\"Scalable Meta-IRL Through Context-Conditional Policies\\\" (Ghasemipour et al. 2019) and maybe \\\"Robust Imitation of Diverse Behaviors\\\" (Wang et al., 2017), are closely related context conditional meta learning methods (similar to Yu et al. which you cited). They may merit citation.\", \"Bottom of page 2: Why do they have the traditional caveats? They are using rewards, so they should be able to do better.\", \"Section 3\", \"Section 3.1: A number of works which also combine Meta-IL & Meta-RL use the setup you use as well (Primal Inference + Exploratory Adaptation). I think you should acknowledge this and provide citations (such as Mendonca et al., Zhou et al., Gupta et al. which you cited in other sections).\", \"Section 3.2: What is a Variational Encoder? This is not a standard term.\", \"Section 3.2: Please clarify how the KL relates to the mutual information. While this may be a simple connection, I think it should be explained better.\", \"__Section 3.2: Where is equation 1 coming from? It looks like the Evidence Lower Bound but $\\\\mathcal{G}(\\\\mathcal{T}|z)$ is not specified and could be a non-likelihood objective function. Please clarify.__\", \"__Section 3.2: You mention that you are using an encoder in the same manner as Rakelly et al., but you do not actually describe the method in your manuscript and how you perform context aggregation. Please include a complete description in the main manuscript.__\", \"__Section 3.2: First paragraph of page 4 requires significantly better clarification. Specifically the sentences \\\"This is generally not ... in unseen environments.\\\" It is not clear why you are saying your context-based approach is better.__\", \"__Section 3.3:__\", \"__While you do acknowledge Yu et al. in the beginning of this section, you understate the extent to which this is similar to their mutual information objective. Equations 3&4, \\\"Learning from demonstrations\\\", and \\\"Linking variational posterior\\\" are effectively identical to Yu et al.'s with different variable names. You should clarify and attribute credit to their work in a more clear manner.__\", \"__I do acknowledge that you have tried to address the intractability of $\\\\mathcal{L}\\\\_{info}$ in a different manner than Yu et al., but the main manuscript would benefit from a brief explanation for why you took this route (even though you have derivations in the appendix)__\", \"__You need to explain why $\\\\mathcal{L}\\\\_{BC}$ is not included in your objective. 
From looking around in your appendix, I think this is because you are estimating the $p(z)$ marginal with the expectation distribution as in equation 5, so you don't take the gradient with respect to $q\\_\\phi$. This still needs explanation in my opinion.__\", \"__Section 3.4: First sentence, what do you mean by unsupervised? You are using the critic to train the model that generates $z$. Please elaborate.__\", \"__Section 3.4: Sentence before equation 9, doesn't this depend on what information is included in the vector $b$?__\", \"__General Question: What is the reinforcement learning algorithm you are using?__\", \"__Section 3.5: I don't understand what you mean by $\\\\mathcal{D}^{\\\\mathcal{T}}$.__\", \"Section 4\", \"Please explain what the auxiliary information is for each task.\", \"__Baselines__\", \"__As far as I can tell, you have not compared to any prior method that combines Meta-IL and Meta-RL (such as Mendonca et al. or Zhou et al., which you cite in your work). I think this is a big flaw of your experimental section. Including this would demonstrate whether your latent-based meta-learning approach is better than prior works which used other types of meta-learning, which should be your main value proposition. I think latent-based might work better than the MAML-based approaches in some prior work, but you have not provided any experiments to support this.__\", \"__Why are you comparing to noisy BC and not normal BC (also, from your description of noisy BC it is not clear what the method is)? Also please clarify what BC loss you are using (e.g. mean-squared error loss, maximum likelihood with learned variances, or whatever you used).__\", \"__I think it would be valuable to also include one setting where only the auxiliary loss + critic loss (no mutual info) is used, to see the value of the additional mutual information losses.__\", \"Section 4.1: How many train/test tasks are there in each task family?\", \"__Section 4.1: You mention sparse rewards here, but in the appendix there is information about \\\"dense rewards during meta-training\\\" for the critic. I don't quite understand what is happening here. Are you using sparse or dense rewards? And does every baseline method that uses rewards use the same type of reward? Please clarify.__\", \"__Section 4.2: Do you mean that you are training a single agent on all tasks simultaneously? How are you doing this when the observation/action spaces are different? Do all these have the same observation/action space?__\", \"__Section 4.2: Last sentence of paragraph 1 needs elaboration. I don't think you have provided any explanation for these experiments, for example how you are setting up experiments for adapting to unseen dynamics.__\", \"__Section 4.2: Table 1, how is 0 possible? Also, you haven't provided an explanation of how these values are computed.__\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A very ambitious work which falls short on the science\", \"review\": [\"## Summary of the Work\", \"The work propose a method which allows us to synthesize meta-RL and meta-IL, by pre-training and conditioning a context-based off-policy meta-RL algorithm on imitation data. Strongly inspired by PEARL (Rakelly, et al 2019) and meta-IL (Yu, et al 2020), this method outperforms previous methods by varying margins on a range of newly-introduced 2D and 3D robotics tasks. The work introduces several new design elements and losses to this family of methods, and the experiments do not make clear which ones are responsible for the increased performance. Additionally, it's not clear the included experiments can fully substantiate the long list of claims provided by the authors, not the least of which is that their method performs zero-shot adaptation to new tasks.\", \"## Pros and Cons\", \"### Pros\", \"Addresses several important problems in adaptive RL\", \"generalizing to complex tasks and outside-of-distribution tasks\", \"using demonstration data to avoid costly random exploration\", \"fine-tuning policies acquired with meta-RL/meta-IL\", \"Hyperparameters and reward functions are well-documented\", \"The visualizations are clear and helpful\", \"Overall the work is well-organized\", \"### Cons\", \"Makes many claims about the method which are difficult to fully substantiate in such a short paper\", \"Treatment of prior work is limited to only a few methods from the past few years, and does not acknowledge many prior works in context-based meta-RL/MTRL. Relationship to the very-similar prior works PEARL and meta-IL is unclear.\", \"Experiments section makes it very difficult to compare presented results to prior work\", \"Lack of ablations make it unclear which (of many) new design decisions are responsible for the method's performance\", \"Experiments don't appear to make a credible simulation of demonstration data which might be encountered in the real world\", \"Claims of applicability to robotics necessitates comparisons with robotics benchmarks (See MetaWorld and/or RLBench).\", \"Presented method is very complex\", \"Experiments lack multiple seeds and any statistical significance tests\", \"Terminology and notation often veers far from previous works and conventions, making it difficult to parse. Sometimes writing is ambiguous or unclear.\", \"## Evaluation\", \"### Quality\", \"3/5\", \"The presented work and experiments are high-quality in their motivation, implementation, and usually their presentation. I think the quality of this work suffers when we consider how well it positions itself with respect to prior work (not well), and the level of detail with which it explores and substantiates each of its claims (of which there are many, making it impossible to address any of them fully). The method section is extensive and recapitulates in detail many concepts from RL, variational inference, IL, etc. It can probably be more sparse to make room for a better treatment of prior work and more experiments/analysis of the work's claims.\", \"### Clarity\", \"2/5\", \"This work suffers from significant clarity issues, mostly around use of language (e.g. zero-shot learning vs few-shot learning), non-standard notation and terminology (e.g. \\\"primal inference,\\\" \\\"privileged information,\\\" etc.) 
and the use of equations which are not really necessary to demonstrate a point.\", \"### Originality\", \"3/5\", \"While original in nature (for instance, introducing privileged information, considering fine-tuning and off-distribution tasks, using demonstrations for pre-training, etc.), the work does not make it easy for the reader to divine how its contributions build on significantly-similar previous works which are highly-cited. This does it and the reader a disservice.\", \"### Significance\", \"2/5\", \"If all claims in the work are well-substantiated, it can be a very significant work, but I don't believe they are substantiated. For instance, the introduction mentions low-dimensional observation spaces as a limitation of current meta-RL/IL work, but the experiments don't seem to contain any use of high-dimensional (e.g. image) observation spaces. It's not clear how the claim of zero-shot learning is supported.\", \"### Misc Editorial Comments and Reviewer's Notes\", \"#### Claims\", \"Addresses three limitations of current meta-RL/IL\", \"shaped reward functions\", \"constrained low-dimensional action/observation spaces\", \"requires hand-defining low-dimensional observation/action spaces\", \"A hybrid framework which combines the merits of RL and IL\", \"tasks defined only using demonstrations\", \"unlike other (meta)-IL algorithms, allows for improvement of the policy after adaptation\", \"Uses only proprioceptive actions from the agent, and implicitly recovers the external environment state\", \"Allows for learning new tasks without requiring any expert knowledge in the human teacher\", \"Achieves \\\"exceptional\\\" adaptation rates and is capable of exploiting demonstrations for efficient exploration\", \"Outperforms other meta-RL and meta-IL baselines\", \"Is capable of zero-shot learning\", \"Is capable of multi-family meta-learning and out-of-family meta-learning through clustering in the latent space\", \"Shows how we can use privileged information (during training) to create an auxiliary loss for training the embedding function, allowing us to recover the \\\"true underlying state\\\"\", \"#### Mechanisms\", \"Represents a task by a latent vector, which is the belief state of the task given a demonstration\", \"Meta-training encourages high mutual information between the demonstration data and latent space\", \"After adaptation, the agent can explore and update the latent space\", \"#### 1. Introduction\", \"\\\"hand-crafted, shaped reward functions...\\\" nothing in the meta-RL formulation requires shaped reward functions, as opposed to sparse ones which are easier to craft. Granted, on-policy meta-RL algorithms are challenged by sparse reward settings in a similar fashion to on-policy RL algorithms, but this is a property of RL in general and not just meta-RL. [2] is a meta-RL method which can cope with sparse rewards, and is extensively-cited in this work.\", \"\\\"defining a low-dimensional..\\\" this is not a limitation per se of meta-RL or meta-IL -- there's nothing in their formulation which necessitates meta-RL operating on low-dimensional state as opposed to images, though it is certainly a design challenge. See [1]\", \"\\\"different task families\\\" Perhaps I have misunderstood the authors, but this seems orthogonal to the purpose of the work and of meta-RL.
It is not immediately clear what the authors mean by \\\"task families.\\\" While there is certainly work on cross-domain transfer in IL and RL, adapting policies to different action and/or observation spaces is not a typical goal of meta-IL and meta-RL algorithms, so it seems strange to level this critique.\", \"[see \\\"claims\\\" above]\", \"#### 2. Related Work\", \"Meta-learning and meta-RL far predates Wang and Duan. Please see [3,4, etc.]\", \"Modern work on context-based meta-RL and adaptive RL predates PEARL. Please see [5, 6, 7] which all perform variational inference on trajectories to generate a latent context, which can then be used for adaptation\", \"#### 3. Method\", \"The proposed approach seems hardly different than PEARL[2] with the following changes. The reviewer may have missed something, but given close relationship between these methods, please make crystal clear for readers the differences between this method and the substantially-similar PEARL method.\", \"Pre-training uses demonstration data rather than RL episodes\", \"This work studies what happens if you continue training after adaptation\", \"Introduces an auxiliary loss which allows conditioning on privileged information\", \"3.2: \\\"Traditional meta-RL methods leverage RNN-based\\\" - This is hardly true in any universal sense. Previous meta-RL method have used RNNs [8], variational inference [2], autoregressive models (attention)[9], hierarchy [10], exploration policies [11], etc. to implicitly model the latent task space.\", \"3.3: The included equations don't seem to add much to the paper's story and seem to recapitulate well-known results from RL, variational inference literature, or cited work.\", \"3.5: The use of a SAC expert trained on full-state versions of the environment is not a faithful simulation of expert data, which will be significantly noisier and lower-entropy than a SAC expert, and also is unlikely to be optimal according to any RL loss. I think that this reduces the IL aspects work to a form of offline RL where the offline data source is a SAC policy, and the results demonstrate that the method can reconstruct the privileged information which was available to the SAC agent. The authors note that they augment the demonstration data with \\\"imperfect demonstrations\\\", but are silent about how this is achieved (and it must be done with great care).\", \"#### 4. Experiments\", \"4.1: Though it is ambiguous from the text, these experiments seem to either present 1 experiment per method, or the average of 3 experiments per method. This is unfortunately not enough data to make a statistically meaningful comparison of the performance, especially considering the small performance differences involved. Please see [12] for a handy guide on how to compare performance. In short, you will likely need 10 seeds for each experiment and should conduct a statistical test to ensure your differences are real. Please include a 95% confidence interval in your plots for the benefit of readers.\", \"4.1: These experiments are meaningful and helpful, but it's also important to readers that they can verify you have reasonable implementations of the comparison methods. This necessitates providing some results for some of the environments used in Yu, et al and/or Rakelly, et al. How is the reader to know your implementation or hyperparameters are fairly representing the comparison methods?\", \"4.2: I think the reader would benefit from seeing t-SNE plots from the comparison methods as well. 
The presented plots look very similar to t-SNE plots generated by plotting samples from a PEARL posterior.\", \"4.2: It is very unclear what the authors mean by \\\"zero-shot\\\" learning. By my estimation, this method always requires some samples of the target environment to attain the presented performance, placing it squarely in the few-shot domain.\", \"This work introduces many new design elements on top of PEARL and Yu et al., and it's unclear which of them are responsible for the observed performance. Please include ablations which compare your method's performance without each new design element, to demonstrate the impact of each.\", \"[1] https://arxiv.org/pdf/2006.07262.pdf\", \"[2] http://proceedings.mlr.press/v97/rakelly19a/rakelly19a.pdf\", \"[3] http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.1796\", \"[4] https://www.sciencedirect.com/science/article/abs/pii/S0893608002002289\", \"[5] https://openreview.net/pdf/29c35690ff52463c84d9456ab511e4c944ddbea4.pdf\", \"[6] https://arxiv.org/pdf/1809.10253\", \"[7] https://arxiv.org/abs/1806.02813\", \"[8] https://arxiv.org/abs/1611.02779\", \"[9] https://arxiv.org/abs/1707.03141\", \"[10] https://arxiv.org/abs/1710.09767\", \"[11] https://arxiv.org/abs/1802.07245\", \"[12] https://arxiv.org/abs/1904.06979\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"Summary: This work seeks to efficiently learn new tasks by combining meta-RL and imitation learning (IL). Such a combination is a natural thing to try, as both lines of work improve sample complexity of learning a new task: meta-RL by leveraging experience on prior related tasks, and IL by leveraging demonstrations. Demonstrations also form a natural way of specifying a new task to the agent.\\n\\nThis paper extends an existing meta-RL approach (PEARL) to additionally leverage demonstrations. This leads to strong results on a set of 2D problems, as well as a 3D reaching task.\\n\\nSpecifically, PEARL is a Thompson-Sampling approach, consisting of two main components: (i) a learned posterior distribution $q(z \\\\mid c)$, which encodes a distribution over latent $z$\\u2019s reflecting the task conditioned on previously observed states, rewards, and actions $c = s_0, a_0, r_0, \\u2026$; and (ii) a context-conditioned policy $\\\\pi(a \\\\mid s, z)$, which is used for both exploration to infer the task, by producing more trajectories for the context $c$, and for solving the task, once uncertainty over $z$ is low. Specifically, this paper modifies PEARL in the following ways:\\n- The context-conditioned policy is trained with an objective similar to behavior cloning to produce trajectories that match the demonstrations, in addition to learning from normal reward signal.\\n- The posterior $q(z \\\\mid c)$ is trained to produce $z$\\u2019s that encode information about the task, given demonstrations for $c$.\", \"strengths\": [\"This paper studies an important problem: how can we quickly learn new tasks? For many real-world RL tasks, we want policies that can quickly adapt to new tasks without retraining from scratch. This paper observes that prior approaches have drawbacks: IL on its own can be data-hungry, requiring additional roll-outs or many demonstrations; and meta-RL can be challenging with sparse rewards. Therefore, combining the two is a natural and promising direction to investigate.\", \"The experimental results are generally quite encouraging. PERIL substantially outperforms the baselines, and can even generalize to a fairly wide distribution of 2D tasks (i.e., the same policy can learn to simultaneously do reaching, peg-placing, and key-rotating tasks, while existing works typically learn a narrower task distribution).\"], \"weaknesses\": [\"Fairly strong assumption. This paper assumes that an expert distribution $p_{\\\\pi_E}(\\\\tau \\\\mid z)$ over trajectories conditioned on the learned latent $z$ is available. This seems to be a fairly restrictive assumption, since $z$ is learned by PERIL, and therefore, it seems unreasonable for an expert policy to also be able to condition on $z$. Instead, it would be nice if we could relax this to only condition on e.g., the observation $o$.\", \"Clarity. While the high-level approach is clear, many of the details are confusing and unclear, which makes it challenging to evaluate this approach. I list the main points of confusion below.\", \"The problem statement defines the task in terms of $z$, which is confusing, because $z$ should be part of the approach, rather than part of the setting. In particular, it\\u2019s unclear what it means for the dynamics model to condition on $z$. It seems like this may be mixing the learned latent $z$ with the true state $s$? 
More generally, the problem statement (Section 3.1) mixes the approach with the problem setting, which makes it confusing to understand what is a constraint due to the setting, and what\\u2019s a design choice for the approach.\", \"The principled way to optimize Eq (3) is to maximize the variational lower bound (Barber & Agakov, 2003), by substituting the posterior $p(z \\mid \\tau)$ with an arbitrary function $q(z \\mid \\tau)$. This appears to be what the paper is doing, but the current phrasing is pretty unclear. In particular, it\\u2019s unclear to me how $\\mathcal{L}_\\text{info}(z)$ is optimized / defined. How is $p(z \\mid \\tau)$ defined? It\\u2019s clear how you can do this in the case where the task descriptor is available, e.g., in Section 3.4, but in general, it\\u2019s unclear what the learned $z$ should be. Is this from leveraging the latent space of the expert SAC agent? What is $\\pi_b$ in Equation 6?\", \"The notation for the task-dependent objective $G(\\tau)$ seems unnecessary and serves to distract \\u2014 in particular, it\\u2019s not initially clear why we need this and not just maximizing the expected discounted rewards. I would suggest removing this notation, and just saying at the end of the approach: \\u201coverall, we minimize the following loss: $\\mathcal{L}_\\text{critic}(z) + \\mathcal{L}_\\text{info}(z) + \\mathcal{L}_\\text{aux}(z) + \\ldots$.\\u201d\", \"There are quite a few undefined loss functions in line 12 of Algorithm 1, in particular $\\mathcal{L}_\\text{mi}$ and $\\mathcal{L}_{D_{KL}}$.\", \"Related work. This paper generally seems to lack appropriate citations in several key places.\", \"In the introduction, several key areas seem to require citations (e.g., citations for meta-IL, posterior sampling with meta-RL should cite PEARL, claims that meta-RL requires shaped rewards / claims that meta-IL cannot adapt afterwards).\", \"The following references seem highly relevant to the related-work section on exploration in meta-RL: [1], [2], [3].\", \"Experiments.\", \"Why is the behavior cloning baseline trained with noisy demonstrations? It seems like the fair comparison should be BC w/o noise.\", \"This paper claims that PERIL is capable of exploring beyond demonstrations, but the tasks that this paper evaluates on don\\u2019t seem to require particularly sophisticated exploration. Substantiating these claims seems to require evaluation on tasks requiring more exploration.\", \"I am initially recommending rejection due to the aforementioned weaknesses. I believe that the related work and clarity could be improved during the rebuttal period, which would help me raise my score, although I find the strong assumption to be a fairly serious weakness.\"], \"additional_comments\": [\"Ill-formatted citations. Many of the citations are missing the year, e.g., Zhou et al., Ross et al., Mendonca et al., Duan et al., Yu et al.\", \"Contextons \\u2014> contexts?\", \"Variational Autoencoders are inconsistently abbreviated as VE and VAE. Seems like it should follow the standard of using VAE.\"], \"references\": \"[1] VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning. Luisa Zintgraf, Kyriacos Shiarlis, Maximilian Igl, Sebastian Schulze, Yarin Gal, Katja Hofmann, Shimon Whiteson. Oct. 2019. ICLR 2020. https://arxiv.org/abs/1910.08348.\\n\\n[2] Explore then Execute: Adapting without Rewards via Factorized Meta-Reinforcement Learning. Evan Zheran Liu, Aditi Raghunathan, Percy Liang, Chelsea Finn. June 2020. 
ICML LifelongML Workshop 2020. https://openreview.net/forum?id=La1QuucFt8-.\\n\\n[3] Environment Probing Interaction Policies. Wenxuan Zhou, Lerrel Pinto, Abhinav Gupta. July 2019. ICLR 2019. https://arxiv.org/abs/1907.11740.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
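
The review above summarizes PEARL's two components quite compactly. For readers who want the Thompson-sampling adaptation loop in concrete form, here is a toy sketch in Python; `encode_posterior`, `policy`, and `env_step` are hypothetical stand-ins invented for illustration, not the reviewed paper's (or PEARL's) actual implementation.

```python
# Toy sketch of a PEARL-style adaptation loop: sample a task hypothesis z
# from the posterior, act under it, and fold the new experience into the
# context so the posterior sharpens. All components are dummy placeholders.
import numpy as np

rng = np.random.default_rng(0)

def encode_posterior(context):
    # q(z | c): a dummy Gaussian whose uncertainty shrinks with more context.
    n = max(len(context), 1)
    return np.zeros(2), np.ones(2) / np.sqrt(n)

def policy(state, z):
    # Context-conditioned policy pi(a | s, z); dummy deterministic action.
    return np.tanh(state[:2] + z)

def env_step(state, action):
    # Dummy transition returning (next_state, reward).
    return state + 0.1 * np.concatenate([action, action]), float(-np.sum(action ** 2))

context, state = [], np.zeros(4)
for episode in range(3):
    mu, sigma = encode_posterior(context)
    z = rng.normal(mu, sigma)           # Thompson sampling over tasks
    for t in range(5):
        a = policy(state, z)
        state_next, r = env_step(state, a)
        context.append((state, a, r))   # more context -> sharper q(z | c)
        state = state_next
```
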
]
} |
q3KSThy2GwB | Practical Real Time Recurrent Learning with a Sparse Approximation | [
"Jacob Menick",
"Erich Elsen",
"Utku Evci",
"Simon Osindero",
"Karen Simonyan",
"Alex Graves"
] | Recurrent neural networks are usually trained with backpropagation through time, which requires storing a complete history of network states, and prohibits updating the weights "online" (after every timestep). Real Time Recurrent Learning (RTRL) eliminates the need for history storage and allows for online weight updates, but does so at the expense of computational costs that are quartic in the state size. This renders RTRL training intractable for all but the smallest networks, even ones that are made highly sparse.
We introduce the Sparse n-step Approximation (SnAp) to the RTRL influence matrix. SnAp only tracks the influence of a parameter on hidden units that are reached by the computation graph within $n$ timesteps of the recurrent core. SnAp with $n=1$ is no more expensive than backpropagation but allows training on arbitrarily long sequences. We find that it substantially outperforms other RTRL approximations with comparable costs such as Unbiased Online Recurrent Optimization. For highly sparse networks, SnAp with $n=2$ remains tractable and can outperform backpropagation through time in terms of learning speed when updates are done online. | [
"recurrent neural networks",
"backpropagation",
"biologically plausible",
"forward mode",
"real time recurrent learning",
"rtrl",
"bptt"
] | Accept (Spotlight) | https://openreview.net/pdf?id=q3KSThy2GwB | https://openreview.net/forum?id=q3KSThy2GwB | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"GFORc7FXoY",
"7fQRj8Pl8cr",
"P9D_0VmJvB",
"YmBTOBfh_e",
"9vhkZaoRijT",
"hhBFPhSeNRm",
"KaL-7mvRuJp",
"tb7YY5cWtAJ",
"R5O9q72g5Fe",
"0UIaJprNIhM",
"9MvVOSfxl2",
"w51Nms2hUSG",
"R5dyJMrzI1a"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040392546,
1606261516858,
1606250523203,
1606250489436,
1606175232368,
1605644253694,
1605643960612,
1605643735165,
1605643641895,
1603924424430,
1603904653060,
1603862215843,
1603843492240
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3731/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper introduces a method for approximating real-time recurrent learning (RTRL) in a more computationally efficient manner. Using a sparse approximation of the Jacobian, the authors show how they can reduce the computational costs of RTRL applied to sparse recurrent networks in a manner that introduces some bias, but which manages to preserve good performance on a variety of tasks.\\n\\nThe reviewers all agreed that the paper was interesting, and all four reviewers provided very thorough reviews with constructive criticisms. The authors made a very strong effort to attend to all of the reviewers' comments, and as a result, some scores were adjusted upward. By the end, all reviewers had provided scores above the acceptance threshold.\\n\\nIn the AC's opinion, this paper is of real interest to the community and may help to develop new approaches to training RNNs at large-scale. As such, the AC believes that it should be accepted and considered for a spotlight.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thanks for answering my follow up questions and the latest changes to the manuscript.\\nI modified my score to support accepting your work.\"}",
"{\"title\": \"Official reply to follow-up questions.\", \"comment\": \"Follow-up questions addressed point-by-point.\\n\\n> What are the results of TBPTT when stateful training is applied and how do they compare in such case? \\n\\nApologies for the lack of clarity here, whenever we truncate we do \\u201cstateful training\\u201d, i.e. we pass the RNN state forward even though we have done a weight update in the interim. We have updated section 2.2 of our paper to be explicit that all \\u201ctruncated BPTT\\u201d experiments in our paper are \\u201cstateful\\u201d in the sense that they pass forward the RNN state across truncation boundaries.\\n\\n> Also, the copy task has been solved with fewer neurons in previous works\\n\\nThe big difference in our work is that these networks are highly sparse, so the number of neurons is not an apples-to-apples comparison of network capacity. This also depends on the version of the copy task in question. It seems there is some confusion -- please see our response below as to why the tasks in the \\u201cCan recurrent networks warp time\\u201d paper are not equivalent to the copy task used here.\\n\\n> My suggestion was to specify in the text what falls under the problems that can be solved with SnAp. If the same problems as RTRL can be solved, then a simple sentence or two stating it should suffice.\\n\\nThanks, we agree this could be made more explicit and have added a sentence to this effect in the introduction.\\n\\n> Are the authors implementing the \\\"repeat copy task\\\" [1, section 4.2] or the \\\"copy task\\\" [1, section 4.1]? The title of section 5.2 reads \\\"Copy Task\\\" and there is no mentioning about the repetitions in the text.\\n\\nGood catch, it is indeed \\u201cCopy\\u201d not \\u201cRepeat Copy\\u201d so we should indeed have referred you to section 4.1, not section 4.2 in the previous reply. As in Mujika et al [1], the sequence is only copied once, there are no repetitions. Sequences are indeed length 2L + 3.\\n\\n> What is the y-axis showing in Figure 4? Is the maximum L achieved?\\n\\nYes exactly, an (x, y) point in that plot shows the level of L=y achieved after data_time=x training tokens have been processed.\\n\\n> Is stateful or stateless training used for TBPTT with T=1 in the copy task? And in the character-level language modeling task?\\n\\nStateful a.k.a \\u201ctruncated\\u201d BPTT. As described in section 2.2 (updated). In the language modelling experiments we always do full backpropagation through time on the whole sequence, which is why BPTT is a \\u201cgold standard\\u201d upper limit on performance.\\n\\n> For instance, [2] discuss a copy task for sequences with 1000 elements with an LSTM.\\n\\nThe copy task in that paper is much different from the one in our paper and the ones we have been discussing in [1, 2]. Rather than copying length 1000 sequences, that work is copying length-10 sequences after a length-1000 gap of noise inputs. Copying a length-1000 sequence requires storing much more information than does waiting 1000 steps to copy a length-10 sequence (to see this, note that only 10 bits need to be stored to copy a length-10 bit sequence, but 1000 bits need to be stored to copy a length-1000 bit sequence. 
Counting the number of steps requires a logarithmic, not linear number of bits so it\\u2019s strictly easier).\\n\\nFurthermore, rather than training with a curriculum and bumping the sequence length when some proportion of predictions are 100% correct, they always train an LSTM on length ~1000 sequences and report the error in terms of negative-log-likelihood. \\n\\n> SnAp seems to be learning temporal patterns, however, can you explain why are these long-term dependencies?\", \"i_think_this_is_a_very_insightful_point\": \"what is long-term? Upon reflection, we agree that what counts as \\u201clong\\u201d is quite subjective, as 128 steps really isn\\u2019t so long for some datasets but long for others. We have replaced some \\\"long-term\\\" phrasing when talking about temporal dependencies.\\n\\nAs we mention in the paper, one direction we are quite excited about is trying out SnAp with more recent architectures involving self-attention or sparsely accessed memory (e.g. [2, 4]) that are better at taking advantage of long contexts [3, Figure 7] (c.f. [4, Fig 4 (b)]). We have left that for future work.\\n\\n[1] Mujika et al. Approximating real-time recurrent learning with random kronecker factors. NIPS '18, pp. 6594\\u20136603.\\n\\n[2] Graves et al. Neural Turing machines. Preprint at http://arxiv.org/abs/1410.5401 (2014).\\n\\n[3] Jared Kaplan et al. Scaling laws for neural language models, 2020. https://arxiv.org/abs/2001.08361\\n\\n[4] Jack W Rae et al. Scaling memory-augmented neural networks with sparse reads and writes. NIPS\\u201916, pp. 3628\\u20133636.\"}",
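
To make the task layout being discussed concrete, here is a minimal sketch of a copy-task example generator consistent with the 2L + 3 structure described above (pattern copied once, no repetitions). The exact token encoding, the blank/delimiter symbols, and the placement of the extra bookkeeping positions are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of a copy-task example with total length 2L + 3:
# a length-L binary pattern, a delimiter, then the target asks for the
# pattern to be reproduced after the delimiter. A curriculum would bump L
# once enough predictions are correct.
import numpy as np

def make_copy_example(L, rng):
    pattern = rng.integers(0, 2, size=L)          # L random bits to memorize
    blank, delim = 2, 3                           # hypothetical extra symbols
    inputs = np.concatenate([pattern, [delim], np.full(L + 2, blank)])
    targets = np.concatenate([np.full(L + 1, blank), pattern, np.full(2, blank)])
    assert len(inputs) == len(targets) == 2 * L + 3
    return inputs, targets

rng = np.random.default_rng(0)
x, y = make_copy_example(L=5, rng=rng)
```
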
"{\"title\": \"Official reply to missed point in first review.\", \"comment\": \"Thanks very much to reviewer #4 for taking the time to read our reply thoroughly, double-check the references, and give us a thoughtful response. We have addressed your comments in a new revision and uploaded it to OpenReview.\\n\\nFirst, we noticed that we missed a point in your initial review.\\n\\n> When n is big, the experimental results show a better and competitive performance to BPTT. However, in such cases (like Snap-3) the computational cost becomes very expensive compared to BPTT by at least 2 orders of magnitude, and the matrices become more dense.\\n\\nWe agree that the case for using SnAp-N in practice becomes diminished as N grows large (e.g. 3 or greater), because the costs become comparable to full RTRL and therefore much more costly than BPTT except in the regime of extremely high sparsity.\\n\\nFor SnAp-2, the asymptotic numbers in Table 1 show that in theory the computation costs can be reduced to the level of BPTT (or lower) by making the network sparse enough. Quoting section 3.3, the costs become comparable to BPTT if \\u201cif the sparsity of the RNN is increased so that $d < n^{\\\\frac{-2}{3}}$ , e.g. 99% or higher sparsity for a 1000-unit Vanilla RNN.\\u201d Snap-2 does do quite well in e.g. language modelling (Figure 2) and Copy (Figure 4), outperforming BPTT for training sparse LSTMs in terms of learning speed.\\n\\nAppendix B was included to motivate the sparsity level/network size we\\u2019d need to fully realize a performance win in practice, but we haven\\u2019t yet managed to scale SnAp to this regime.\"}",
"{\"title\": \"Follow up questions\", \"comment\": \"### Follow up Questions:\\nWould like to thank the authors for taking time to answer my previous questions.\\n\\nFirst, I would like to clarify that I agree with other reviewers and the authors that *the ideas* in this work are a current research topic in this community and do have merit. Please, keep in mind that the whole content of your work is reviewed, not only ideas. \\n\\n> Reviewer #4 seeks clarity on when we should use SnAp instead of BPTT/RTRL in practice. This paper isn\\u2019t about a new method that should be used in production immediately, but the same is true of much (all?) of the literature on making Real Time Recurrent Learning efficient.\\n\\nMy suggestion was to specify **in the text** what falls under the problems that can be solved with SnAp. If the same problems as RTRL can be solved, then a simple sentence or two stating it should suffice.\\n\\n> The most important criticism from Reviewer #4 is that we haven\\u2019t demonstrated the capacity for learning long-term temporal patterns. We respectfully disagree: the Copy task experiments show exactly the length of time over which SnAp can capture temporal structure when training a wide array of reasonably-sized recurrent networks (Vanilla, GRU, and LSTM).\\n\\nSnAp seems to be learning temporal patterns, however, can you explain why are these **long-term** dependencies? \\nSection 5.1 trains with sequences in language modeling of $128$ characters long. The copy task in section 5.2, seems to contain much less elements for the longest sequences. Assuming Mujika et al. setup with $2L+3$ for a sequence length, with max length of $\\\\sim 63$ elements. For instance, [2] discuss a copy task for sequences with 1000 elements with an LSTM.\\n\\n> We agree that the description has become quite terse due to space limits but we have added more detail in the latest revision: thanks for the suggestion! For a more thorough description of the task, please see [1, section 4.2].\\n\\nAre the authors implementing the \\\"repeat copy task\\\" [1, section 4.2] or the \\\"copy task\\\" [1, section 4.1]? The title of section 5.2 reads \\\"Copy Task\\\" and there is no mentioning about the repetitions in the text.\\n\\n* What is the y-axis showing in Figure 4? Is the maximum $L$ achieved? \\n* Is stateful or stateless training used for TBPTT with $T = 1$ in the copy task? And in the character-level language modeling task?\\n\\nI will be happy to modify my score, if the questions above are answered and the paper improved accordingly. \\nIf submitted, happy to look at code for clarifications instead.\\n\\n### References:\\n\\n[1] Graves, A., Wayne, G. & Danihelka, I. Neural Turing machines. Preprint at http://arxiv.org/abs/1410.5401 (2014)\\n\\n[2] Tallec C. and Ollivier Y. Can Recurrent Neural Networks warp time?. ICLR 2018.\"}",
"{\"title\": \"Official Reply\", \"comment\": \"We thank reviewer #1 for their helpful review.\\n\\nThe Adam parameters are the default of the framework we used and were not tuned for any of the techniques. There is a ReLU non-linearity in between the 1024-unit MLP and the 256-unit softmax. $\\\\theta$ is initialized using a truncated normal distribution whose std. deviation is the inverse of the square root of the fan-in. The embedding matrix is not shared between the input and the output.\\n\\nWe note that the plots for the copy task (figure 4) do show the variance across 3 runs (the variance is quite small for many of the runs and barely visible).\\n\\nThis learning rate sweep came from [1]. We have now extended it to also include $10^{-2.5}$ and still find that $10^{-3}$ is the best in almost all cases. Higher-orders of SnAp (SnAp-2 and SnAp-3) perform better with this larger learning rate for a very small number of configurations, but for simplicity and at a slight disadvantage to SnAp we will continue to report results for all methods with $10^{-3}$. The differences in performance between the methods are large and we do not believe that methodology (1) or (2) would lead to any different conclusions.\\n\\nCurrently we use the simple strategy of picking a sparsity pattern by selecting edges uniformly at random. There are more sophisticated strategies that are not immediately compatible with RTRL, such as RigL [2], but we believe that combining these sparse training techniques with RTRL is future work.\\n\\n> In page two, the paper introduces a function $g_{\\\\phi}$ for mapping the state to the target and then says that the goal of the learner is to minimize loss with respect to $\\\\theta.$ Shouldn't the loss be minimized with respect to both $\\\\theta$ and $\\\\phi$? Or are the authors implying that the readout layer is fixed? (Or perhaps $\\\\phi$ is a subset of $\\\\theta$?)\\n\\nWe agree it is slightly confusing that the exposition does not comment on the learning of readout parameters $\\\\phi$. We do train the parameters of the readout, but we separated them out for the purposes of the exposition because the core issue is gradient computation for parameters which affect recurrent state. The gradients w.r.t $\\\\phi$ can always be accumulated forward in time without special consideration for BPTT versus RTRL. As you can see the code snippet in our reply to reviewer 2, we use backpropagation to compute the readout gradients and can simply forget any readout activations used in previous timesteps without any approximation being made.\\n\\n[1] Asier Mujika, Florian Meier, and Angelika Steger. Approximating real-time recurrent learning with random kronecker factors. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett (eds.), Advances in Neural Information Processing Systems 31, pp. 6594\\u20136603. 2018.\\n\\n[2] Evci, U., Gale, T., Menick, J., Castro, P. S., and Elsen, E. (2020). Rigging the lottery: Making all tickets winners. In Proceedings of the 37th International Conference on Machine Learning.\"}",
"{\"title\": \"Official Reply\", \"comment\": \"We thank reviewer #2 for their very thorough review.\\n \\nThanks for making it clear that we did not describe the differences between SnAp-1 and RFLO clearly enough. First, we would like to note that we did not use random feedback in our implementation of RFLO: we used exact feedback with the structure of the updates proposed by RFLO. We will clarify this in the paper.\\n\\nWe believe that even in the case of leaky RNNs RFLO and SnAp-1 are different. To show this we take equation 22 from section 3.6 of [1], and rewrite it in the notation used in our paper in equation 3 (which we also rewrite for a leaky RNN). Here $i$ is defined as $i=u(j)$ as it is in our paper.\\n\\nSnAp-1 = $(J_t)_\\\\{ij\\\\} = \\\\alpha(I_t)_\\\\{ij\\\\} + (\\\\alpha D_t + (1 - \\\\alpha))_\\\\{ii\\\\}(J_\\\\{t-1\\\\})_\\\\{ij\\\\}$\\n\\nRFLO = $(J_t)_\\\\{ij\\\\} = \\\\alpha(I_t)_\\\\{ij\\\\} + (1 - \\\\alpha)(J_\\\\{t-1\\\\})_\\\\{ij\\\\}$\\n\\nThe key difference being that the previous Jacobian is multiplied only by the constant $(1 - \\\\alpha)$ in the case of RFLO, whereas in SnAp-1 it will have a dependence on $D_t$, even for a leaky RNN. Also - we quote from appendix 1 of the RFLO paper [2] \\u201cSecond, we simply drop the terms involving \\ud835\\udc16 in Equation (11), so that nonlocal information about all recurrent weights in the network is no longer required to update a particular synaptic weight.\\u201d\\n\\nThis is an important point to get right and we would appreciate the reviewer feedback on this analysis. Once we are in agreement, we will add this clarification to the paper.\\n\\nThank you for the reference to the e-prop paper, it is indeed exciting work. After reviewing it, we agree that eprop-1 and SnAp-1 are indeed essentially describing the same idea, although with very different expositions. It is unfortunate that the -1 postfix has a different meaning in the two names. We will update the paper appropriately to reflect this prior work and emphasize our different exposition.\\n\\nEprop-3 is a clever way to combine RTRL and synthetic gradients with BPTT. RTRL allows for information to be carried from the past into the truncation window and synthetic gradients allow for a better estimate of the gradient from ahead of the truncation window. Using eprop-1 / SnAp-1 makes it possible to compute the RTRL component with a similar time complexity to BPTT. Eprop-3 seems well suited to the not fully online setting, whereas Snap-2/3 are still suitable in a fully online setting. In the offline setting it might be interesting to consider using SnAp-2 to replace eprop-1 / SnAp-1 to allow for more information carried from the past.\", \"addressing_technical_details\": \"We will add some details to the appendix describing how these values are obtained. The form A + B is indeed where A is the cost of going forward and B is the cost of the backward calculation in BPTT or the influence matrix update in RTRL variants.\\nWe will fix the figure to make this clearer. The solid lines are when updates are done online (i.e. T = 1) and the dashed lines are when updates are only done at the end of each episode.\\n\\nYes, we distinguish between $k^2$ and $p$ because while for a vanilla RNN, they are the same, they might not be in architectures such as GRU and LSTM. 
It is confusing that $p$ refers to always the dense parameter size here, we will make this explicit.\\nIn this section in the appendix we use weight magnitude during training with BPTT, there is no retraining with SnAp. \\n\\nThis example is meant to be motivating - but achieving this level of performance with SnAp would also require a sparse training method such as RigL [3] that is compatible with SnAp, which we believe is beyond the scope of this paper.\\n\\nWe used JAX and indeed we found it quite nice for this type of research. Whilst we can\\u2019t do a full code release at this time, we can attach a working, simplified snippet showing how we implemented SnAp with JAX. https://drive.google.com/file/d/1OcGSdsfKkIg9uibqblRwGJ5E5NxEdES4/view?usp=sharing\\n\\n\\n[1] Owen Marschall and Kyunghyun Cho and Cristina Savin, \\u201cA Unified Framework of Online Learning Algorithms for Training Recurrent Neural Networks\\u201d, Journal of Machine Learning Research, 2020, http://jmlr.org/papers/v21/19-562.html\\n\\n[2] James M Murray. Local online learning in recurrent networks with random feedback. eLife, 8:e43299, 2019.\\n\\n[3] Evci, U., Gale, T., Menick, J., Castro, P. S., and Elsen, E. (2020). Rigging the lottery: Making all tickets winners. In Proceedings of the 37th International Conference on Machine Learning.\"}",
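
The structural difference between the two updates written above is easy to see numerically. This toy sketch iterates both recursions for a single tracked entry with $i = u(j)$; the immediate-influence value and the $(D_t)_{ii}$ term are random placeholders, since only the form of the decay term matters here.

```python
# Numerical sketch of the two diagonal-block updates above for a leaky RNN.
# I_imm stands in for (I_t)_{ij} and d for (D_t)_{ii}; both are placeholders.
# The point: SnAp-1's decay depends on the state via D_t, RFLO's does not.
import numpy as np

alpha = 0.3
rng = np.random.default_rng(0)

J_snap1, J_rflo = 0.0, 0.0
for t in range(100):
    I_imm = rng.normal()        # (I_t)_{ij}: immediate influence of param j on unit i
    d = np.tanh(rng.normal())   # stand-in for (D_t)_{ii}, varies with the state
    J_snap1 = alpha * I_imm + (alpha * d + (1 - alpha)) * J_snap1
    J_rflo = alpha * I_imm + (1 - alpha) * J_rflo   # drops the state-dependent term
```
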
"{\"title\": \"Official Reply\", \"comment\": \"We thank the reviewer for their comments and want to help clarify some of the confusion.\\n\\nChanging the size of a network changes the capacity of a network. Choosing the right size of a dense network for a given task is often a matter of trial and error. Sparsity also changes the capacity of a model, and the right combination of size and sparsity generally must also be determined with trial and error. Previous work, for example, Efficient Neural Audio Synthesis, has shown that for RNNs efficiency increases as the sparsity level increases, up to at least ~99%, which is also what we observe on language modeling in this paper.\\n\\n> In appendix authors talk about \\u201cmodeling performance of the two variants of GRU\\u2026.what is the relationship between sparsity and two variants?\\n\\nThe key is that variant 2 avoids \\u201cthe composition of parameterized linear maps within a single RNN step\\u201d. If we remove the non-linearities for clarity we can see that variant 1 involves a product of two parameter matrices W_ha and W_ir. In variant 2, there is no matrix product of parameter matrices. This is the key difference. And note that this specifically refers to the sparsity of the jacobian of the state w.r.t. the parameters.\\n\\n> How does sparsity measure is introduced in this work? Does model stay consistent whenever regularization approaches such as zoneout or dropout are used or introduced into the model?\\n\\nWe have not experimented with other forms of regularization like dropout and zoneout because very sparse networks are already regularized due to the removal of a large proportion of their parameters. We encourage the reviewer to see Figure 4 where we try many different sparsity levels for a few different RNN variants. Investigating the combination of dropout or zoneout in conjunction with sparse parameters seems like an interesting idea, but is orthogonal to focus of the paper on RTRL.\\n\\n> Authors states that \\u201cIn order to induce sparsity, we generate a sparsity pattern uniformly at random and fix it throughout training\\u201d What is the range for random uniform?...\\n\\nWe agree this could be made more explicit and have done so in the updated draft. When we say random uniform, we mean a uniform choice over which (set of) parameters are removed. Alternatively, we could say that an independent bernoulli choice is made for each parameter, whether it should be kept or removed (forced to zero). The weights themselves are initialized from the same distribution that would be used for a dense model.\\n\\n> what modification is introduced on snap-1 beside training it on GRU?\\n\\nAs discussed in Section 4 (Related Work), We have defined SnAp-1 in general terms for any recurrent architecture, but it ends up being quite similar to the algorithm used to train LSTM in the original Hochreiter & Schmidhuber 1997 paper. The exact details of that algorithm are pretty hard to discern from the paper\\u2019s exposition but it\\u2019s clearly similar in flavour (tracking the influence of a parameter within a small subset of units). 
We have explained our version of the idea simply and formally, characterised its computational properties versus alternative algorithms in the literature, extended the idea to any recurrent architecture and generalized it to an n-step instead of 1-step approximation.\\n\\n> It is important to show speed (with various sparsity, convergence plots or else these variants would have similar performance and memory requirement compared with vanilla RTRL\\n\\nWe\\u2019d refer you to table 2, where FLOPs serve as a proxy for speed and we show FLOPs cost as a multiple of BPTT or RTRL cost. We note that prior published papers on RTRL approximations have not been competitive on a wall-clock basis with BPTT. Our implementation takes advantage of some of the memory and compute savings that are possible with SnAp, but not all of them. Current popular deep learning frameworks make it difficult to take full advantage of sparse linear algebra, but we hope that work such as ours will spur development in this area.\"}",
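
A minimal sketch of the sparsification described in this reply: an independent Bernoulli keep/drop choice per parameter, with the resulting mask fixed for the rest of training and the kept weights initialized as in the dense case. The fan-in-scaled initializer is an assumption for illustration.

```python
# Sketch of the described scheme: keep each weight with prob (1 - sparsity),
# then hold the mask fixed throughout training.
import numpy as np

def sparse_weights(shape, sparsity, rng):
    mask = rng.random(shape) >= sparsity   # independent Bernoulli keep/drop per weight
    fan_in = shape[0]
    w = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=shape)  # dense-style init (assumed)
    return w * mask, mask                  # mask stays fixed for all of training

rng = np.random.default_rng(0)
W, mask = sparse_weights((256, 256), sparsity=0.99, rng=rng)
```
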
"{\"title\": \"Official Reply\", \"comment\": \"We thank reviewer #4 for their comments.\\n\\nReviewer #4 seeks clarity on when we should use SnAp instead of BPTT/RTRL in practice. This paper isn\\u2019t about a new method that should be used in production immediately, but the same is true of much (all?) of the literature on making Real Time Recurrent Learning efficient. And yet ICLR has served as one of the leading venues for this line of research in recent years [see e.g.: https://openreview.net/forum?id=rJQDjk-0b, https://openreview.net/forum?id=ryGfnoC5KQ].\\n\\nIndeed, as the self-identified expert reviewers for our paper have commented, this is basic research into an important and active research topic in the Deep Learning community at large, (as well as in the neuroscience community): the development of temporal learning algorithms that are scalable to long sequences, large networks, and fully online learning. We agree with these reviewers (#1 and #2) that our paper has strong merit as a contribution to this fruitful line of research.\\n\\nThe most important criticism from Reviewer #4 is that we haven\\u2019t demonstrated the capacity for learning long-term temporal patterns. We respectfully disagree: the Copy task experiments show exactly the length of time over which SnAp can capture temporal structure when training a wide array of reasonably-sized recurrent networks (Vanilla, GRU, and LSTM).\\n\\nWe agree that it would be fruitful for future work to extend these investigations, as well as the Language Modelling ones to longer sequences. In our view this is a matter of scaling up the method to larger networks, because the ability of a recurrent network to capture long term temporal structure can be bottlenecked by the architecture size in terms of hidden units and the number of parameters, as well as the learning algorithm.\\n\\nIn the paper\\u2019s Conclusion, we have clearly set out the software and hardware barriers to scaling up our methods further and believe that our experiments are a respectable first step. We can push further once the aforementioned engineering challenges are ameliorated.\\n\\nBelow we resolve smaller matters which this reviewer wanted clarified, one-by-one.\\n\\n> The relationship between neurons can vary every time the parameters are updated. Is it assumed to be fixed over the entire training?... Also, complexities for Snap in table 1 don\\u2019t seem to consider the computational time of computing the sparsity pattern (even for a random pattern).\\n\\nIn our experiments, the sparsity pattern is chosen uniformly and then fixed throughout training, as explained in the final sentence of section 2.3. This strategy is simple and compatible with our optimizations to RTRL (section 3.2). The costs of picking a random sparsity pattern end up being negligible compared to the training run just like other computation done for network initialization. There is an active literature investigating methods for adapting the sparsity pattern and it is far from a solved problem. Combining dynamic sparsity patterns [SET, RigL, etc.] with our work would be an interesting future direction.\\n\\n> The Copy task details are unclear\\u2026\", \"we_agree_that_the_description_has_become_quite_terse_due_to_space_limits_but_we_have_added_more_detail_in_the_latest_revision\": \"thanks for the suggestion! 
For a more thorough description of the task, please see [1, section 4.2].\\n\\n> Minor issues\\u2026\\n\\nThanks also for catching these, we will update the manuscript.\\n\\n[1] Graves, A., Wayne, G. & Danihelka, I. Neural Turing machines. Preprint at http://arxiv.org/abs/1410.5401 (2014).\"}",
"{\"title\": \"A promising study (second review: even better than I thought!)\", \"review\": \"## Second review\\n\\nThanks for taking all my comments seriously. After clarification of the difference with RFLO I see that this work is even richer than I thought and I increase my grade to 8. It seems that other reviewers did not appreciate that training a network without back-prop requires nontrivial engineering and theoretical considerations which are well described in this paper, I truly think it is a pity if this work is not accepted.\\n\\nI fully agree with the difference between RFLO and Snap-1 that you describe in your reply, and I think it would be really great to put that somewhere in the paper. As you suggest it would be great to explain that you did not use random feedback weights for RFLO.\\n\\nThis would also be a great opportunity to explain how did you extend RFLO to a GRU network in Figure 3. I find it a bit puzzling, that RFLO appears worse than an untrained network in Figure 3 (even early in training as in seen in Figure 3.B). Is there any additional difference in the network model for these two baselines, like one is using leaky RNN and the other one GRU or something like that?\\n\\nI find the piece of JAX code incredibly rich. It would be great to publish that along with the paper! JAX is not yet very well spread, and we see here that it is a very promising tool for custom gradient in RNNs.\\n\\n## Summary\\n\\nThe authors describe new algorithms to train sparse recurrent neural networks, these algorithms are described as variants of RTRL. These methods, called SNAP-$n$, use the same induction as in RTRL but approximate the true Jacobian matrix $J$ by a sparse matrix where each coefficient is set to zero if the corresponding parameter does not influence the corresponding state variable within $n$ steps. These alternatives to BPTT alleviate the memory consumption growing otherwise linearly with the sequence length. \\n\\nA theoretical complexity analysis and simulation experiments are carried out. The simulations are performed on the character prediction task and a synthetic copy task. The authors report that the network reaches performance comparable to BPTT and sometimes better (snap-3 leads to better copy task performance with GRU, snap-2 seems already better with LSTMs, however it requires many more FLOPS).\\n\\n## General opinion\\n\\nCongratulations to the authors. This is an important topic since recurrent networks are not efficiently trained with BPTT. More intensive and rigorous research are needed to find suitable alternatives. The SNAP idea is simple and appealing, and the results encouraging (even though it still requires a large number of FLOPS).\\n\\nI recomputed rather carefully the complexity analysis for sparse RTRL, snap 1 and snap 2 and arrived at similar results. Theoretical results seem to be correct and the experiments are credible. \\n\\n## Requires clarification\\n\\nOne negative comment is that snap-1 has been published before in other forms as explained below The writing makes it sounds more novel that it actually is and some comparisons with other algorithms are unfair. It would be great to correct the shot but snap deserves publication anyway since it also provides a rigorous analysis of the full snap family and snap-1 had not been explored in such details in the interesting context of sparse networks.\\n\\nRFLO for leaky RNNs is basically exactly snap-1 with the additional burden of carrying random feedbacks. 
It is written three times that snap-1 is better than RFLO but it would be great to comment on the differences and explain where is the difference of performance coming form. The random feedback was introduced in RFLO to avoid the transport problem for biological plausibility. It does not seem relevant to do this extra approximation step if RFLO is used for performance and not plausibility. If the random feedback is the main difference, one should clearly say that the algorithm are otherwise identical. If the authors see other differences, it would be great to indicate what they are.\\n\\nThis very same algorithm (snap-1, RFLO) was also published under the name e-prop 1 [a] although the theory was derived differently. The authors had shown that e-prop 1 (aka snap-1) works well on the copy task, a word prediction task with LSTMs (Figure 4 in [a]), ATARI games and TIMIT (more recent work). The authors had also suggested an amelioration called e-prop 3 that improved the performance and kept the same time complexity as BPTT unlike snap-2 and snap-3. Maybe it is relevant to comment on the relationship between snap and this paper [a] ?\\n\\nIn case the authors were not aware of this, an other interesting approximation to RTRL was published in [b], the authors may or may not comment about that too.\", \"there_are_technical_details_that_should_be_given_for_clarification\": [\"The authors might want to provide details on the computation of the complexity of one or two of the essential component of the table to make it more accessible to the reader. If I am not mistaken I think that the complexity results are true \\\"up-to a proportionality constant\\\" for a general RNN model, maybe it would be great to write that in the caption for instance. The complexity is written in the table in the form A + B, I understood than A is the complexity in the forward pass and B is the complexity in the backward pass, but maybe it should be explained. Maybe there is also an easy way to confirm those numbers with the empirical number of FLOPS given later in the paper ?\", \"In Figure 4, I cannot find out what the colored dash lines are meant to be. This caption is rather short and there is an opportunity to add some information: for instance what is \\\"curricula max\\\".\", \"In Figure 2, the author use k^2 by they have also introduce the letter p. Is that not meant to be the same thing? It probably depends on the RNN model? Before doing the calculation of the complexity myself I did not know whether p included already the coefficient d, I do not think that it obvious that zeroed coefficients are still considered as \\\"parameters\\\" in the \\\"number of parameters\\\" p. Maybe this can be said in the caption?\", \"If I understand correctly the pruning method in appendix B, the network is first trained until convergence with BPTT and then, the best architecture is fixed to be retrained with snap? It would be great to clarify this paragraph because it is not easy to read. If that's the case there could be a substantial transfer of information by passing on the \\\"winning\\\" architecture from BPTT to the SNAP training, in particular for very sparse networks. Is that really necessary? How much would the performance decrease? As a control, I would guess that for something like 75% sparsity the network can be trained from a random matrix. 
Also I do think that there are now simple methods available for training much sparser networks from scratch, and it would not require this pre-BPTT step.\", \"It would be so great to have some details about the implementation: what software did you use to perform this \\\"remake\\\" of a forward propagation? Did you have to implement a custom C++/cuda code, maybe use JAX ? Did you use sparse matrix cuda kernel, how good was it? Maybe the code will be shared, if not any details are welcome.\", \"[a] Biologically inspired alternatives to backpropagation through time for learning in recurrent neural nets\", \"Guillaume Bellec, Franz Scherr, Elias Hajek, Darjan Salaj, Robert Legenstein, Wolfgang Maass\"], \"https\": \"//openreview.net/forum?id=ryGfnoC5KQ\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
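
For readers who want the induction this review refers to in concrete form, here is a toy sketch of the SnAp-$n$ idea for a vanilla RNN: build the "influence within $n$ steps" pattern from the recurrent connectivity, then run the RTRL recursion $J_t = I_t + D_t J_{t-1}$ while zeroing entries outside the pattern. This is an illustrative reconstruction, not the authors' JAX code; the toy sizes and the input-free cell are assumptions.

```python
# SnAp-n sketch for h_t = tanh(W h_{t-1}) with sparse W (no inputs, toy size).
import numpy as np

rng = np.random.default_rng(0)
k = 8                                              # hidden units
keep = rng.random((k, k)) < 0.3                    # sparse connectivity pattern
W = rng.normal(size=(k, k)) * keep
n_params = k * k                                   # flattened index j = a*k + b for W[a, b]

# Immediate pattern: parameter W[a, b] directly affects only unit a.
imm = np.zeros((k, n_params), dtype=bool)
for a in range(k):
    imm[a, a * k:(a + 1) * k] = True

# SnAp-n pattern: parameters whose influence reaches unit i within n steps.
n = 2
conn = W != 0
pattern = imm.copy()
for _ in range(n - 1):
    pattern |= (conn.astype(np.int64) @ pattern.astype(np.int64)) > 0

# One step of the masked RTRL recursion J_t = (I_t + D_t J_{t-1}) * pattern.
h_prev = rng.normal(size=k)
h = np.tanh(W @ h_prev)
d = 1.0 - h ** 2                                   # tanh'(pre-activation)
D = d[:, None] * W                                 # dh_t / dh_{t-1}
I_t = np.zeros((k, n_params))
for a in range(k):
    I_t[a, a * k:(a + 1) * k] = d[a] * h_prev      # dh_t[a] / dW[a, :]
J = np.zeros((k, n_params))                        # influence matrix at t-1
J = (I_t + D @ J) * pattern                        # keep only tracked entries
```
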
"{\"title\": \"iterative work on adding sparsity to RNNs, need more clarification\", \"review\": \"## Second Review\\nThe author's thoughtful response has clarified most of the missing details in the paper. It is true that idea is interesting and theoretical analysis are promising. However, I still have issues understanding failure conditions. If the method is purely based on trial and error to determine optimal threshold for sparsity, then it requires many engineering tricks. Thus I would request authors to provide such details, such that young researchers can extend this work to create better training paradigm for RNNs. Other issue as pointed out by other reviewers is using only 3 runs to report results. I really appreciate explanation about difference between 2 variants of GRU and how it adds sparsity to the model. Additionally JAX code helped in understanding many key apsects of the work. I would encourage authors to make it public, and provide key insights for training RNNs using snap. Nonetheless, the proposed method and theoretical analysis are insightful and I believe this to be a first step towards building scalable RNNs which efficiently gets rid of credit assignment issue. This paper does add significant pedagogical value, which can benefit complex task such as grammatical inference. I'm increasing my score from 5 to 7 and I hope this paper is accepted. \\n\\n## Summary\\nPaper introduces snap which adds sparsity to the influence matrix extracted from RTRL which acts as a practical approximation for RTRL, snap is extension of prior work on snap-1 used to train LSTM, and authors have shown that one can train dense as well as sparse RNNs using snap achieving similar performance as BPTT on real as well as synthetic dataset. Few clarifications in terms of snap working and few key information w.r.t to parameters are missing.\\n\\n## Clarification\\n\\nHow does one evaluate level of sparsity required for any given task? At what n (sparsity ratio) optimal performance is observed which leads to better performance. It is well known that full RTRL (forward propagation helps compared with backward propagation), especially in case of continual or online learning [Ororbia and Mali 2020] and copy task (KF-RTRL, UORO). Does current sparsity measure work on variant of RTRL? And how do you determine top k to select k elements for creating a sparse matrix? is it random like 70-80-99? Many key details are missing, and authors are requested to provide more information to better understand model flow.\\n\\nIn appendix authors talk about \\u201cmodeling performance of the two variants of GRU. which has been shown to be largely the same, but the second variant is faster and results in sparser D_t and I_t\\u201d. I am confused, what is the relationship between sparsity and two variants? Please provide some numbers explaining how sparsity is increased by moving reset gate after the matrix multiplication.\\n\\nHow does sparsity measure is introduced in this work? Does model stay consistent whenever regularization approaches such as zoneout or dropout are used or introduced into the model? Do you observe the similar performance? Does network roughly converge to similar performance with optimal sparsity or sparsity measure changes as other regularization approaches are introduced? Did you do grid search for language modelling task or copy task (beside learning rate)? If so please provide details? 
Citation and comparison missing with Sparse attentive backtracking, which in theory can work with sparse networks and its temporal credit assignment mechanism can help in introducing sparsity [ke and goyal 2019]. \\n\\nAuthors states that \\u201cIn order to induce sparsity, we generate a sparsity pattern uniformly at random and fix it throughout training\\u201d What is the range for random uniform? Is model sensitive whenever sparsity pattern is changed while training (may be per epoch or k epochs). How can one ensure that the sparsity pattern at start is the optimal one for any network? Does similar pattern work for all GRU, LSTM, RNN or one needs to adapt scheme based on architecture?\\n\\nAdvantage of snap-2 and 3 over snap-1, snap-1 is similar to (Hochreiter & Schmidhuber, 97) work on training LSTM, what modification is introduced on snap-1 beside training it on GRU? And sparse networks. It is still unclear what advantage these 3 variants add. It is important to show speed (with various sparsity, convergence plots or else these variants would have similar performance and memory requirement compared with vanilla RTRL\\n\\n\\n[Ke and Goyal 2018] Ke, N.R., GOYAL, A.G.A.P., Bilaniuk, O., Binas, J., Mozer, M.C., Pal, C. and Bengio, Y., 2018. Sparse attentive backtracking: Temporal credit assignment through reminding. In Advances in neural information processing systems (pp. 7640-7651).\\n\\n[Ororbia and Mali 2020] Ororbia, A., Mali, A., Giles, C.L. and Kifer, D., 2020. Continual learning of recurrent neural networks by locally aligning distributed representations. IEEE Transactions on Neural Networks and Learning Systems\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
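
To make the two GRU candidate-state variants under discussion concrete, here is a minimal sketch with generic placeholder weight names. Variant 1 applies the reset gate before the recurrent matmul, so a step composes two parameterized linear maps; variant 2 applies it after the matmul, which the authors' reply describes as keeping the relevant per-step Jacobians sparser.

```python
# The two GRU candidate computations discussed in this thread, side by side.
# W_in and U_rec are generic stand-ins for the input and recurrent weights.
import numpy as np

def gru_candidate_v1(x, h, r, W_in, U_rec):
    return np.tanh(W_in @ x + U_rec @ (r * h))   # reset gate applied before U_rec

def gru_candidate_v2(x, h, r, W_in, U_rec):
    return np.tanh(W_in @ x + r * (U_rec @ h))   # reset gate applied after the matmul

rng = np.random.default_rng(0)
k = 4
x, h = rng.normal(size=k), rng.normal(size=k)
r = 1.0 / (1.0 + np.exp(-rng.normal(size=k)))    # sigmoid reset gate
W_in, U_rec = rng.normal(size=(k, k)), rng.normal(size=(k, k))
c1 = gru_candidate_v1(x, h, r, W_in, U_rec)
c2 = gru_candidate_v2(x, h, r, W_in, U_rec)
```
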
"{\"title\": \"Nice paper; experiment protocol needs improvement\", \"review\": \"## Post response update\\nThe author's response has clarified most of the missing details in the paper. I still have an issue with reporting results for 3 runs --- even if the variance is small for 3 runs, that does not imply that there won't be outliers when one does more runs. Nonetheless, the proposed method is insightful and the paper has significant pedagogical value. I'm moving my score from 6 to 7 and I hope the paper is accepted. \\n\\n## Summary\\nThe paper tackles the structural credit-assignment problem for a recurrent network when parameter values in earlier time-steps can impact the prediction in the future. The most common approach to achieve this structural credit-assignment, in the current deep learning literature, is BPTT. BPTT, however, is not suitable for online learning. First, the computation and memory requirements of BPTT grow with the length of the sequence. Second, BPTT does not spread computation uniformly --- all the computation happens at the end of an episode. This is not suitable for an online learner that has to learn and react in real-time. An alternative to BPTT is RTRL. The computation and memory needs of RTRL is distributed uniformly across time-steps and RTRL makes it possible to learn at every step, but it requires an intractable amount of memory and compute for even modestly sized networks. \\n\\nThe limitation of both BPTT and RTRL necessities research on new algorithms. Ideally, we want algorithms that spread the computation uniformly and are still scalable. SnAp, the methods proposed in this paper, is one such algorithm. The general idea behind SnAp is to approximate the gradient by only taking into account the impact of a parameter $w_j$ on an activation $h_i$ only if $w_j$ influences $h_i$ with-in n steps. A similar algorithm was used in the original LSTM paper that was equivalent to SnAp with n=1 for the LSTM architecture. This paper generalizes the algorithm used in the original LSTM paper in two dimensions. First, it generalizes it to methods beyond the specific LSTM architecture, and second, it can keep track of influence of parameters across $n$ steps instead of just 1. While the cost of SnAp increases quickly as $n$ increases, the authors propose a promising direction for keeping the cost down. They argue and show that for highly sparse RNNs, SnAp can be scaled to $n > 1$. \\n## Review \\n\\n### Strengths \\nThe paper is well written. It summarizes the prior work concisely and explains the two views of computing the gradient for an RNN --- the recursive view used by RTRL and the unrolling view used by BPTT --- clearly. The need for sparsity in RNNs is well-motivated and the observation that sparsity in RNN slows down the propagation of influence of a parameter on a state is interesting. The new algorithm, SnAp, is clearly presented as an approximation to RTRL. The paper also does not make unsubstantiated claims and explains the merits and limitations of the proposed method clearly. Overall, I'm highly impressed by the quality of the paper and the merits of the idea. \\n\\n\\n### Weaknesses \\nWhile the paper excels in writing quality and the proposed method is sound, the empirical evaluation of the method has several issues. First, it's not clear how the hyper-parameters for all the methods were tuned. Were the parameters tuned for SnAp and inherited for other methods? Were they tuned independently? 
The paper mentions that it used $\\\\beta_1=0.9$ and $\\\\beta_2=0.999$ for the Adam Optimizer without explaining how they were chosen. \\n\\nMany details of the experiment setup are not fully specified. For example, on page 6 the authors mentioned that they use a one-layer MLP to get 1024 hidden units which are mapped to a 256-unit softmax, but do not clarify if a non-linearity is applied to the 1024 units. The paper also omits how $\\\\theta$ is initialized. \\n\\nThe experimental results have no error margins, and are the mean of only 3 runs. Ideally, authors should repeat the experiments for over 20 runs and report the standard error of the mean. Even if they are limited by available compute and are unable to do many runs, they should at least report the standard error for however many runs they do (Note that standard error computation is biased for few runs and it might be a good idea to apply bias correction. More details here: https://en.wikipedia.org/wiki/Standard_error). \\n\\nParameter sweeps should be extended if the optimal parameter is at the edge. The authors report that they tried $10^{-3}, 10^{-3.5},$ and $10^{-4}$ and found $10^{-3}$ to be best. However, all this tells me is that a higher learning rate could have been even better. They should include larger learning rates in the sweep for the experiment. \\n\\nIt's not clear if the authors are (1) finding the best learning rate and then re-running for 3 seeds for the best learning rate or (2) simply running the experiments for 3 different seeds for all learning rates and reporting the best results. The former is a sound strategy whereas the latter suffers from over-estimation bias. \\n\\nGiven the issues in the experiment methodology, I'm giving the paper a weak accept for now even though I think that the paper excels in many ways. The issues identified can easily be fixed during the discussion period and I would be more than happy to change my score to an accept or a strong accept after a revision that fixes the experimental issues. \\n\\n\\n### Questions \\n1. In page two, the paper introduces a function $g_{\\\\phi}$ for mapping the state to the target and then says that the goal of the learner is to minimize loss with respect to $\\\\theta.$ Shouldn't the loss be minimized with respect to both $\\\\theta$ and $\\\\phi$? Or are the authors implying that the readout layer is fixed? (Or perhaps $\\\\phi$ is a subset of $\\\\theta$?)\\n\\n2. It's not clear to me from the write-up how the sparsity pattern is chosen empirically. Is the idea to run the RNN for n-steps, empirically observe entries in $J_n$ that are zero, and fix those entries to be zero for all future steps? If yes, could the initial weights of the RNN and the data cause an entry in $J_n$ to be zero even if it would not have been zero for a different initialization and data-stream?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
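To make the SnAp-n scheme discussed in the two reviews above concrete, here is a minimal NumPy sketch — not code from the paper — of RTRL's influence recursion restricted to a fixed n-step sparsity pattern. The random matrices stand in for the true Jacobians, and all names (`snap_mask`, `fan_in`, and so on) are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_h, n_in, n_steps = 8, 4, 10
fan_in = n_h + n_in                      # recurrent + input weights per unit
n_params = n_h * fan_in                  # flattened row-major; block i = unit i

# Fixed random sparsity pattern on the recurrent weights, as in the
# "generate a sparsity pattern uniformly at random and fix it" setup.
W_mask = rng.random((n_h, n_h)) < 0.3    # W_mask[k, i]: unit i feeds unit k

def snap_mask(n):
    """SnAp-n pattern: keep influence entry (k, p) iff the unit owning
    parameter block p can reach unit k within n steps. SnAp-1 keeps only
    the block diagonal (each parameter influences only its own unit)."""
    reach = np.eye(n_h, dtype=bool)      # 0-step reachability
    for _ in range(n - 1):
        reach = reach | ((W_mask.astype(int) @ reach.astype(int)) > 0)
    return np.repeat(reach, fan_in, axis=1)

mask = snap_mask(2)

# RTRL recursion J_{t+1} = D_t J_t + I_t, re-projected onto the fixed
# pattern each step; D_t and I_t are random stand-ins for the Jacobians
# dh_{t+1}/dh_t and the immediate parameter Jacobian.
J = np.zeros((n_h, n_params))
for _ in range(n_steps):
    D = rng.standard_normal((n_h, n_h)) * W_mask
    I = rng.standard_normal((n_h, n_params)) * snap_mask(1)
    J = (D @ J + I) * mask
print(f"density of tracked influence entries: {mask.mean():.0%}")
```

For n=1 the mask is block-diagonal, recovering the approximation used in the original LSTM paper; larger n densifies the mask quickly unless the recurrent weights themselves are sparse, which is exactly the trade-off both reviews discuss.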
"{\"title\": \"Official review #4\", \"review\": \"This work presents a method, named SnAp, that takes Real-time Recurrent Learning derivations and proposes to approximate its computations with sparse approximations to make them more computational tractable. The method is an alternative to overcome the truncation in backpropagation through time (BPTT) over long term temporal structure. The method assumes a sparsity pattern on the parameters of the network that leads to the relationship on how the gradients could be updated. Finally, the method is evaluated against BPTT, UORO and RTRL on character-level language modeling and a copy task.\\n\\n=================\\n\\nThe method is simple and a complexity analysis has been included. The experimental section seems limited in only showing a performance comparison with other methods. A better analysis in aspects of the method (like capacity for learning long-term temporal patterns) is lacking.\\n\\n==================\\n\\nWhen does the Snap/RTRL is not applicable? Are there cases where BPTT is applicable and Snap/RTRL is not? It would be nice to clarify such cases so readers can understand when Snap, RTRL, or BPTT are the right or better solutions. \\n\\nThe motivation behind the method is to overcome the limitation of the truncation behind BPTT for learning long term temporal structure. However this seems to be evaluated with very short sequences overall (128 for language modeling and maybe less than that for the copy task, the length is not present). How well the Snap method would work when training for sequences of thousands of elements, where BPTT is well known to struggle?\\n\\nThe relationship between neurons can vary every time the parameters are updated. Is it assumed to be fixed over the entire training? What would happen if the pattern is updated every few gradient updates? Also, complexities for Snap in table 1 don\\u2019t seem to consider the computational time of computing the sparsity pattern (even for a random pattern).\\n\\nWhen n is big, the experimental results show a better and competitive performance to BPTT. However, in such cases (like Snap-3) the computational cost becomes very expensive compared to BPTT by at least 2 orders of magnitude, and the matrices become more dense. What are the results of TBPTT when stateful training is applied and how do they compare in such case? Also, the copy task has been solved with fewer neurons in previous works. \\n\\nThe Copy task details are unclear for someone that doesn\\u2019t know the work from Mujika et al., 2018. Please describe all the details. What is the length L used in the experiments? What is the length of the overall sequence? What does \\u201cdata time\\u201d mean in the plots of Figure 4?\\n\\n==================\\n\\nMy concerns behind the limited practicality of the method, and the limited experimental results given the hypothesis that the method can learn long-term temporal patterns. These are my considerations not to accept this paper.\\n\\n==================\", \"minor_issues\": \"-Use bigger fonts in the plots, and diagrams.\\n\\n-In 5.1, do you use SGD or Adam?\\n\\n-Figure 3: leave space between caption and figures\\n\\n-Table 1 caption: \\u201cBelow\\u201d -> \\u201cAbove\\u201d\\n\\n-It would be nice to mention the relationship between |\\\\theta| and k, for each recurrent cell case.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
DC1Im3MkGG | Exchanging Lessons Between Algorithmic Fairness and Domain Generalization | [
"Elliot Creager",
"Joern-Henrik Jacobsen",
"Richard Zemel"
] | Standard learning approaches are designed to perform well on average for the data distribution available at training time. Developing learning approaches that are not overly sensitive to the training distribution is central to research on domain- or out-of-distribution generalization, robust optimization and fairness. In this work we focus on links between research on domain generalization and algorithmic fairness---where performance under distinct but related test distributions is studied---and show how the two fields can be mutually beneficial. While domain generalization methods typically rely on knowledge of disjoint "domains" or "environments", "sensitive" label information indicating which demographic groups are at risk of discrimination is often used in the fairness literature. Drawing inspiration from recent fairness approaches that improve worst-case performance without knowledge of sensitive groups, we propose a novel domain generalization method that handles the more realistic scenario where environment partitions are not provided. We then show theoretically and empirically how different partitioning schemes can lead to increased or decreased generalization performance, enabling us to outperform Invariant Risk Minimization with handcrafted environments in multiple cases. We also show how a re-interpretation of IRMv1 allows us for the first time to directly optimize a common fairness criterion, group-sufficiency, and thereby improve performance on a fair prediction task.
| [
"algorithmic fairness",
"domain generalization",
"representation learning",
"invariance"
] | Reject | https://openreview.net/pdf?id=DC1Im3MkGG | https://openreview.net/forum?id=DC1Im3MkGG | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"v7g49vfiBTP",
"4Cw5BwySCGT",
"kNrjeolYQy8",
"EXFuCgg-Uo",
"QhSpOixr95Z",
"CcO1K7h08cz",
"aqAVhT0nVGY",
"Y9bNxEEnp1R",
"00OGPI-NfFl",
"_rN8KFzJMmb",
"op6_KJK-yGd",
"A8vA0SBZUIV"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040389023,
1606198750694,
1606198685689,
1606198636623,
1606198456527,
1606198295913,
1606104667618,
1605195807011,
1604790672417,
1604027016673,
1603819560785,
1603580651005
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3730/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3730/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3730/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3730/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3730/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper analyzes connections between algorithmic fairness and domain generalization literatures. The reviewers found the paper interesting but they also raised some important concerns about it.\\n\\nThe applicability of the method presented in the paper is not clear nor well-discussed in the paper.\\n\\nThe papers and the revised version do not not cite important related work.\\n\\nThe mathematical exposition in the paper is a bit hard to read. Even after revision, the reviewers find part of the paper(Appendix F) very hard to read.\\n\\nOverall, the paper in the current version is below the high acceptance bar of ICLR.\"}",
"{\"title\": \"Main points addressed above in rebuttal\", \"comment\": \"Thanks for your helpful suggestions. In the main rebuttal we have addressed some of your suggestions regarding generalization properties of error, theoretical connections to fairness (through the invariance principle), and baselines.\"}",
"{\"title\": \"Typos fixed\", \"comment\": \"Thanks for your time in reviewing our work, and for your helpful feedback. We fixed the typos you pointed out in the revision.\"}",
"{\"title\": \"Following up on a few points\", \"comment\": [\"Thanks again for your time in reviewing our paper. Here are some responses to minor concerns from your original review:\", \"The connection from IRM to Liu et al is through the invariance principle. Liu et al focus on the group-sufficiency principle from the fairness literature, which as we point out in our paper, is equivalent to the invariance principle in domain generalization, discussed in the IRM paper. The regularizer in IRMv1 was previously proposed as a way of optimizing the invariance principle. We have included in the appendix of the revision a proof showing a condition under which optimizing our proposed softened version of the IRMv1 regularizer will optimize the invariance principle.\", \"The cost of EIILv1 is roughly triple that of IRM or ERM, since we need to first solve ERM for a reference classifier, then solve the inner loop of EIIL (there tend to be fewer per-example weights than network parameters in the settings we describe so this goes quickly), then finally solve IRM with the inferred environments. So we can think of implementing EIIL as roughly the training cost of implementing an ensemble of three networks (but same memory/inference cost as a single network), noting that ensembles tend to fail for the sort of dramatic test-time shifts that we study.\", \"We take care to clarify in the experiments that we have constructed the ConfoundedAdult dataset in order to measure out-of-distribution performance\", \"The domain generalization literature tends to work with discrete rather than continuous domains; because the general strategy of EIIL is to find worst-case environments w.r.t. a specific DG learning algorithm, the application to continuous domains is not obvious. In terms of scaling with increasing cardinality of a discrete domain set, the theoretical properties of IRM suggest that the more (statistically independent) environments the better in term of generalization guarantees. This suggests that extending EIIL to find more than two environments (with a term to promote diversity amongst inferred environments) may further help out-of-domain generalization. While this direction is left for future work, it is now discussed as a footnote in the revision.\"]}",
"{\"title\": \"Following up on particulars\", \"comment\": [\"Thanks for your helpful suggestions. Beyond the main points of our rebuttal above, here are several specific responses to your review\", \"We have updated the reference to include conference names where appropriate.\", \"About the plot colors, we used the seaborn \\u201ccolorblind\\u201d palette for the original submission, so we hope that it is already relatively colorblind friendly. We remain open to suggestions about how to further improve on accessibility of the plots.\", \"Thanks for pointing us to the soft partitions paper, which we now cite.\", \"We use the w=1.0 notation in the same way as Arjovsky et al. For multi-class classification it is equivalent to multiplying all dimensions of the representation by 1.0 (i.e. uniform diagonal loading) prior to the softmax.\", \"To your question about under what conditions solving the objective from eqn 3 satisfies the invariance principle, see the fourth bullet in our main response to all the authors.\"]}",
"{\"title\": \"Rebuttal\", \"comment\": [\"We would like to thank the reviewers for their detailed and extremely helpful reviews. We have updated the manuscript to incorporate these suggestions, Below we summarize how we addressed the common concerns here, and will follow up on a per-reviewer basis to address the remaining issues.\", \"Some reviewers were concerned that we only measured a win when evaluating EIIL in the high label noise regime or for certain choices of reference classifier. This highlights an important aspect of our paper -- our focus is on situations in which ERM performs poorly. This captures a wide range of problems in machine learning (see the Shortcuts paper [2]). We study the high label noise not because it is compelling on its own right, but rather because it serves as a controllable proxy for these cases, in which the ERM reference classifier is sub-optimal. Note that our interest in failure modes of ERM makes it challenging to derive formal guarantees about EIIL without introducing some assumptions over the ERM behavior (this is why we make such assumptions in our theorem).\", \"One reviewer suggested that hand-crafted environments should out-perform inferred worst-case environments, but an important point of our work is that this is not necessarily the case. As we mention in Section 3 (and show empirically in Section 4 and theoretically in Appendix B.2), even when hand-crafted environments are available they can sometimes be improved upon by EIIL. Even when environments/domains are known, they may be suboptimal from the perspective of learning an invariant representation. EIIL tends to find more dramatically different environments, which in turn helps IRM find a good global optimum by making the learning signal through the regularization term more informative.\", \"As the reviewers have pointed out, satisfying the invariance principle is the most important objective to establish a connection to fairness. But it leaves open the question of whether the specific regularizer used in IRMv1 is the best way to achieve the invariance principle (this is an open question in general for domain generalization). We provide new theoretical results in Appendix F showing that maximizing the soft/relaxed version of the IRMv1 regularizer using inferred environments (which is the goal of EIILv1) also maximally violates the invariance principle.\", \"In terms of theoretical guarantees related to generalization on held-out domains, we inherit all the generalization properties of IRM so long as the EIIL solution remains in the same degree of \\u2018linear general position\\u2019 (LGP) as the hand-crafted environments. When domains are Gaussian distributed, the LGP degree can be thought of as the inherent rank of the union of training domains mean vectors (the recent \\u2018Risks of IRM\\u2019 paper [1] does a good job of clarifying this in their appendix). So the EIIL solution can be expected to maintain the LGP degree of the hand crafted domains so long as its partitions do not induce two statistically identical environments. Anecdotally we can say that this does not happen in practice (the inferred environments tend to be quite distinct). Formally, we can appeal to theorem 10 of the IRM paper, stating that the set of covariance matrices that do not lie in LGP (assuming environments come from the linear SEM) is measure zero. 
Noting that covariance matrices under the EIIL-discovered environments q(X|e) can be expressed in terms of expected covariances under the SEM distributions p(X|e) multiplied by an importance weight q(X|e)/p(X|e), so long as q(X|e)/p(X|e) is nonzero the covariances remain positive semidefinite, theorem 10 still holds, and we have the same generalization properties as IRM.\", \"We add three new baselines for the CMNIST experiment (Appendix E.2) following reviewer suggestions. ARL performs better than ERM but still worse than random chance on the test distribution (this is not surprising since distributionally robust methods like ARL are well-suited for smaller test time distribution shifts, not the more drastic intervention in CMNIST that reverses all color correlations). We also add alternating updates between the IRM and EIIL steps (i.e. alternating a single gradient step on the inferred environments update with a single gradient step on the representation update). Unfortunately this strategy, which tends to work well in other bi-level problems like GANs, does not seem to work effectively in our initial studies, again outperforming ERM but achieving below chance rates on the test set. Finally we try optimizing the inner and outer loop multiple times. This strategy induces an oscillation between the correct shape-based classifier and the incorrect color-based classifier, again highlighting the relevance of the reference classifier used by EIIL.\", \"[1] Rosenfeld et al, The Risks of Invariant Risk Minimization, preprint, https://arxiv.org/abs/2010.05761\", \"[2] Geirhos et al, Shortcut Learning in Deep Neural Networks, https://arxiv.org/abs/2004.07780\"]}",
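For readers cross-referencing the Appendix F discussion above with AnonReviewer4's questions further down: under the assumption of two soft environments with per-example weights $q_i$ and $1-q_i$ (a reconstruction from the $\sum_i q_i$ denominators mentioned in that review, not the paper's exact equations), the relaxed per-environment risks and EIIL's inner maximization can be sketched as

$$
\tilde R_1(w) = \frac{\sum_i q_i\, \ell\big(w\cdot\Phi(x_i),\, y_i\big)}{\sum_i q_i}, \qquad
\tilde R_2(w) = \frac{\sum_i (1-q_i)\, \ell\big(w\cdot\Phi(x_i),\, y_i\big)}{\sum_i (1-q_i)},
$$

$$
\max_{q \in [0,1]^N}\; \Big\| \nabla_w \big|_{w=1.0}\, \tilde R_1(w) \Big\|^2 + \Big\| \nabla_w \big|_{w=1.0}\, \tilde R_2(w) \Big\|^2,
$$

i.e., the soft partition is chosen to maximally violate the IRMv1 stationarity condition before IRM is run on the inferred environments.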
"{\"title\": \"REPAIR does not discuss the connections between domain generalization and algorithmic fairness\", \"comment\": \"Thanks very much for your detailed review. We are in process of revising the paper and submitting a more comprehensive rebuttal to address some of the concerns you brought up in the main review (which are helping us to make the paper stronger). In the meantime we wanted to briefly address this extra suggestion about the REPAIR paper. Thank you for pointing us to this work on example reweighting for addressing dataset bias, which we will cite in the revision. However we disagree with your assessment that there is overlap between this paper\\u2019s contributions to the literature and ours. The REPAIR method adaptively reweights the per-example contributions to the overall risk, and is not a domain generalization paper as you suggested (there is no notion of training under multiple domains/environment with the hopes of generalizing well to a held-out domain). It is better understood as a robust optimization paper. Also, the connection to algorithmic fairness offered by the REPAIR paper is rather cursory. For example, they do not establish the connection between example reweighting in their method and example reweighting in fairness methods that deploy distributionally robust optimization (e.g. Hashimoto ICML 2018).\\n\\nThanks again for your time in reviewing our work, and we hope the revision (to be posted shortly) will address the remainder of your suggestions.\"}",
"{\"title\": \"Missing key reference\", \"comment\": \"One more comment that I want to append to my original review:\\nThe idea of making connections between algorithmic fairness and domain generalization has been explored in the literature as early as REPAIR (https://arxiv.org/abs/1904.07911), which is missing as a reference in this paper. This will discount my favorable impression for making that connection, based on which I am lowering my score from 6 to 5. REPAIR also does experiments on Color MNIST (which is the main experimental setup in this paper).\\n\\nLi, Y. and Vasconcelos, N., 2019. REPAIR: Removing representation bias by dataset resampling. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 9572-9581).\"}",
"{\"title\": \"Insightful paper but weak results\", \"review\": \"This paper presents parallels between algorithmic fairness and domain generalization literatures. The authors explore a learning setup where the goal is to learn some representation $\\\\Phi(x)$ that is \\\"independent\\\" of some environmental variable $e$. The authors explore cases where $e$ is known or not and come up with some algorithms that draw connections between recent work on domain generalization, specifically invariant risk minimization (Arjovsky et al 2019) and fairness. The authors conclude the paper with three different examples addressing domain generalization and fairness. While the supporting experimental results are not very strong, the connections observed are interesting. I have several points for clarification that I detail below.\\n\\n**Exposition.** While the introduction of the paper is written nicely and the ideas are communicated nicely, the mathematical exposition and presentation in this paper is not self-contained. For example, it is close to impossible for a reader to follow this paper without having read (Arjovsky et al 2019) and (Liu et al. 2018). I suggest that the authors expand on the mathematical exposition, defining all terms, and explaining different things came from, especially since the paper is supposed to have a broad audience across two communities.\\n\\n**Convergence of Algorithm.** The proposed algorithm comes with no guarantees and I suspect it will not converge in a variety of situations, especially in cases where $C^{\\\\text{IRM}}$ is nonconvex in (EIIL). Why would you need to run the inner optimization and outer optimizations to convergence each time? Have you tried a GDA version of the algorithm?\\n\\n**Cost of the Algorithm.** The proposed algorithm is costly because IRM has to be solved multiple times before it converges. Can you please comment on the computational cost of the proposed algorithm as compared to ERM and other baselines in each experimental setup?\\n\\n**Limitations of generalization-first fairness.** This section is nicely written and much appreciated.\\n\\n**Choice of $\\\\Phi_{spurious}$.** The algorithm is sensitive to initialization choice of $\\\\phi_{spurious}$ as the authors also find with their Color MNIST experiments. In particular, there is a huge performance gap in Fig 1.b. for $\\\\theta_y \\\\in (0, 0.15)$ that needs to be addressed. On the other hand, the algorithm seems to work well in the severely overfitting regime where ERM can be thought of having learnt $\\\\Phi_{spurious},$ as also discussed by the authors in the second paragraph of Section 3.2. However, the real world is not so black and white and hence this poses a severe limitation. Can you please explain?\\n\\n**Connection with (Liu et al. 2018).** In the third paragraph of **Fairness** section in Page 4, the authors claim a connection between IRMv1 (btw, IRMv1 is misspelled as IMRv1 there) and (Liu et al. 2018). This connection is not obvious to me. Can the authors make it rigorous?\\n \\n**Continuous $e$.** Can you please comment on how this setup may generalize to continuous $e$? At least can you please comment on the scaling of the algorithm with the cardinality of the set of environments?\\n\\n**Theorem 1.** Unfortunately, Theorem 1 and entire Section 3.2 is only applicable to a severely unusual and overfitting case (similar to the Color MNIST example) where there is perfect correlation between the environment variable and the label. 
The fact that the algorithm works well in this situation is not surprising. The real world, however, is not black and white and the limitations of the proposed framework in real-world situations (similar to $\\\\theta_y \\\\in (0, 0.15)$ in Fig 1.b.) remain to be understood.\\n\\n**Confounded Adult Dataset.** Please make it clear that this is constructed by the authors. I only understood that when I started looking for the details in the Appendix.\\n\\n**Typos.** (1) In Eq. (2), $w\\\\circ\\\\Phi$ should be replaced with $w.\\\\Phi$. (2). second paragraph of **Fairness** paragraph on Page 4, the word attribute is missing after \\\"sensitive\\\".\\n\\nOverall, while the subject area of the paper is exciting, unfortunately, the execution (both empirical and theoretical) is weak. I remain inclined to vote for rejection, with encouragement for a more thorough empirical and theoretical investigation of the problem.\\n\\n---post rebuttal---\\n\\nAfter reading the authors' response, the other reviews, and the revision to the paper, I find that my comments are not sufficiently addressed. The authors did not even acknowledge the existence of the prior work, REPAIR, in the revised paper. The imprecise mathematical expressions are still in the paper despite feedback from multiple reviewers. From a practical point of view, the developed algorithm is not scalable as it requires (almost) solving the inner maximization at each iteration (based on the rebuttal), and it only works in the significantly overfitting regime (the authors are yet to show its performance in a more interesting regime). From a theoretical point of view, the applicability of the theory is also extremely limited to the perfectly overfitting regime, which does not capture the real world. In addition, I agree with AnonReviewer4 that the proofs are inscrutable. I regret to say that despite the fact that the subject area of the paper is exciting, I am adjusting my score to 4 post rebuttal.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting connection but could be supported with more theoretical guarantees\", \"review\": \"The main contribution of the paper is to highlight the similarity between two active areas in ML namely \\\"domain generalization\\\" and \\\"fairness\\\". Further, the paper proposes an approach inspired by recent developments in the fairness literature for domain generalization. The high-level idea is that similarly to the way that fair algorithm are able to improve the worst-case accuracy of predictors across different groups without knowing the sensitive attributes, perhaps we can use these ideas to domain generalization when environment partitions are not known to the algorithm. In some sense, in both of these research areas the goal is to design robust algorithms. Similarly, the paper uses the idea from domain generalization to design fair algorithms w.r.t. a notion called \\\"group sufficiency\\\". The idea is to somehow infer the \\\"worst-case\\\" subgroup (i.e., the one that our algorithm has the worst accuracy on it) and then using a round of auditing improve the performance of the algorithm across all subgroups.\\n\\nThe authors have supported their approach with empirical evaluations. In particular, I find the result on CMNIST quite interesting where the new algorithm as opposed to the standard approach like ERM will not be fooled by the spurious feature and can infer the useful environment. \\n\\nWhile the paper has introduced (to best of my knowledge) a new concept, it seems that are many interesting questions that could show the applicability of the connection better are not yet answered (e.g., bi-level optimization EIIL). This could also help the paper to be supported with more provable guarantees. In general the paper is exploring a new connection between two areas and has shown its efficacy in practice and I believe it can lead to further works on this topic.\", \"minor_comments\": \"- define the notion of \\\"group sufficiency\\\" explicitly in the paper. I could not find the definition of the notion in words till in the caption of Figure 2 on page 8 and is formally defined on page 12!\\n-page 5: poorly. . Consider -> poorly. Consider\\n-page 6: generalizattion -> generalization\\n-page 7: graysacle ->grayscale\\n-page 14: exagerated -> exaggerated\\n-page 14: orginal -> original\\n-page 17: implicily -> implicitly\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting approach but lacks strong theoretical backing\", \"review\": \"### POST-REVISIONS ###\\nThanks for the revisions made to the theoretical results. I still find parts of the discussion in Appendix F to be unclear. \\n\\nFirstly, how do you derive eq. (5) from eq. (4)? In eq. (4), the denominators \\\\sum_i q_i are independent of \\\"\\\\Phi(x_i)\\\", but in eq. (5), they have a dependence on \\\\Phi through z_{i,b}. I think the change in normalization important to show the invariance principle holds (as the invariance principle requires a conditioning on each value \\\\Phi takes), but am unable to follow your derivation.\\n\\nSecondly, I'm not convinced that the maximizing partition for eq (5) assigns all examples with y=1 to one group, and those with y=0 to another group. Wouldn't the maximizing partition also depend on what \\\\hat{y} evaluates to for those examples?\\n\\nOverall, I'm able to see what the authors are trying to get at with this example, but unfortunately the revisions aren't sufficient to address all of my concerns regarding the theoretical results.\\n********************\\n\\nThe paper presents an approach for training models that generalize well to out-of-distribution (OOD) samples, particularly when the source of domain shift (e.g. spurious correlations or sensitive groups) is not known before hand. The paper combines ideas from two prior papers from the domain generalization and fairness literature: (i) invariant risk minimization for OOD generalization (Arjovsky et al.) and (ii) adversarially reweighting for fairness without protected groups (Lahoti et al.). \\nAt a high level, the proposed approach seeks to minimizes the average classification loss across the worst-case partitioning of the dataset into two groups. Experimental results on datasets with synthetically generated spurious features show that the proposed approach is able to generalize better to OOD samples in the high noise regime, without having knowing aprior which features are spuriously correlated with the labels.\", \"pros\": [\"The question tackled is practically important: how one can generalize to OOD samples without knowing the exact source of discrepancy between train and test data.\", \"The experimental results look encouraging\"], \"cons\": [\"The paper lacks a clear theoretical motivation for the specific optimization objective that the authors end up using (eq 3). In particular, do we know (at least in some a simple setting) that maximizing this objective over soft-group memberships \\\"u_i\\\" will identify the partitioning of the data that maximally violates the Invariant Constraint? I elaborate on this next.\"], \"relaxed_training_objective_lacks_strong_theoretical_backing\": \"The authors directly adapt the training setup of Arjovsky et al., where the goal is to train a model which learns the same conditional label distribution for any given input \\\"x\\\" across a set of known partitioning of the training data, dubbed as the invariance constraint . Each of these partitions, referred to as 'environments', represent a different training distribution, and the goal is to train a model that performs equally well across all of them. Arjovsky et al. show that for the special case of linear invariant predictors, the training problem can be relaxed into an unconstrained objective with a regularization penalty. \\n\\nThe present paper extends the setup of Arjovsky et al. 
to problems where the environments are not a priori known, and seeks to minimize the average classification loss over a partitioning of the data that maximally violates the invariance constraint. However, they do not explicitly solve this optimization problem, and instead simply minimize the worst-case value of the \"relaxed training objective\" of Arjovsky et al. over all (soft) partitionings of the data.\\n\\nIs the relaxation that Arjovsky et al. employ with known environments still relevant to your problem formulation, where you would like the invariance constraint to hold for all possible partitionings of the data?\\n\\nAt the very least, this requires a discussion. Ideally, it would be nice to see a derivation of the relaxation for some simple special cases: e.g. like Arjovsky et al., can you show that for linear predictors, \\\"finding a partition that maximally violates the invariant constraint\\\" is equivalent to \\\"maximizing the relaxed unconstrained objective in eq. 3 over partitions\\\"?\", \"other_comments\": [\"Eq 3: I think \\\"w\\\" is a scalar here (otherwise evaluating the gradient at w = 1.0 doesn't make sense). Please make that explicit and also provide some intuition for why this regularization penalty with a scalar \\\"w\\\" makes sense for your problem set up.\", \"I am not entirely sold on the general theme of this paper of exchanging lessons between fairness and domain generalization. The authors are definitely correct in crediting a prior fairness paper for the idea of adversarially re-weighting examples with a soft groups model, but as they themselves point out this idea has existed in different forms in the domain generalization literature (e.g. DRO). So my reading is that the paper seems to slightly over-emphasize the connection to the fairness literature, but this is a personal take. Having said this, the paper does provide (in Sec 2) a nice literature overview of similar problems tackled by the domain generalization and fairness communities.\", \"In the color MNIST experiments, you observe \\\"IRM(eEIIL) generalizes better than IRM(eHC) with sufficiently high label noise\\\". If I understand correctly, IRM(eHC) has access to the true environments, whereas IRM(eEIIL) uses environments inferred from data. Wouldn't we expect the former method to have an advantage over the latter?\", \"Additional baseline: Would it make sense to compare with (a form of) DRO for the color MNIST task (e.g. ones cited in Table 2)? You do mention in another experiment that Lahoti et al. compare with DRO for their particular fairness application, but do those observations also apply to the tasks you consider in this paper?\", \"Iterative training: I think a natural extension of your approach (which you've probably already thought about) is to solve (EIIL) using an iterative technique that alternates between maximizing over \\\"q\\\" and e.g. performing gradient descent updates on \\\"\\\\Phi\\\". Iteratively performing full optimizations over both sets of parameters may not in general have good convergence properties.\", \"Might be a relevant citation for the use of soft partition assignments for fairness: https://arxiv.org/pdf/2002.09343.pdf\", \"Fig 1: Would be nice if the plots were color blind friendly :)\", \"References: Might be good to mention the conference venue wherever available: e.g. Hashimoto et al. 
appeared in ICML 2018.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"ICLR 2021 Conference Paper3730 AnonReviewer2\", \"review\": \"Summary:\\nThis paper studies the connections between algorithmic fairness and domain generalization. As discussed in Section 2, the \\u201cenvironment\\u201d in domain generalization plays a similar role as the \\u201cgroup membership\\u201d in algorithmic fairness. The paper shows in Table 2 that the methods of each field can apply to the other field. \\n\\nThe paper develops its own algorithm EIIL which extends the Invariant Risk Minimization (IRM) of domain generalization to work in the situation when the prior knowledge of environments is not available. And this extension is mainly based on the idea from algorithmic fairness literature which considers the worst-case environments and solves a bi-level optimization. \\n\\nThe paper shows empirically that their algorithm EIIL outperforms IRM with handcrafted environments in terms of test accuracy on CMNIST.\", \"strength\": \"(1) The connection between domain generalization and algorithmic fairness shown by the paper is interesting.\\n(2) The paper demonstrates the performance of EIIL via empirical results.\", \"weakness\": \"(1) Other than the high level intuitions and examples, the paper does not provide any theoretical analysis of the performance of the EIIL for domain generalization. What guarantees can EIIL get in terms of the test error and how does it compare to IRM (when making reasonable assumptions about the training and test distributions)?\\n(2) Similarly, the paper does not provide any theoretical analysis of EIIL for algorithmic fairness.\\n(3) On top of page 6, after explaining the bi-level optimization, the paper switches to the sequential approach (EIILv1) without much explanation. Why is the bi-level optimization not practical? How well can the proposed sequential approach approximate the bi-level optimization results and how does this affect the performance of EIILv1?\", \"reasons_for_score\": \"Overall I vote for rejection since the weakness outweighs the strength. The lack of theoretical analysis of the algorithm makes the paper incomplete.\", \"typo\": \"\", \"page_5\": \"two periods after word \\u201cpoorly\\u201d.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
kdm4Lm9rgB | Monotonic Robust Policy Optimization with Model Discrepancy | [
"Yuankun Jiang",
"Chenglin Li",
"Junni Zou",
"Wenrui Dai",
"Hongkai Xiong"
] | State-of-the-art deep reinforcement learning (DRL) algorithms tend to overfit in some specific environments due to the lack of data diversity in training. To mitigate the model discrepancy between training and target (testing) environments, domain randomization (DR) can generate plenty of environments with sufficient diversity by randomly sampling environment parameters in a simulator. Though standard DR using a uniform distribution improves the average performance on the whole range of environments, the worst-case environment is usually neglected without any performance guarantee. Since the average and worst-case performance are equally important for generalization in RL, in this paper, we propose a policy optimization approach for concurrently improving the policy's performance in the average case (i.e., over all possible environments) and the worst-case environment. We theoretically derive a lower bound for the worst-case performance of a given policy over all environments. Guided by this lower bound, we formulate an optimization problem which aims to optimize the policy and sampling distribution together, such that the constrained expected performance of all environments is maximized. We prove that the worst-case performance is monotonically improved by iteratively solving this optimization problem. Based on the proposed lower bound, we develop a practical algorithm, named monotonic robust policy optimization (MRPO), and validate MRPO on several robot control tasks. By modifying the environment parameters in simulation, we obtain environments for the same task but with different transition dynamics for training and testing. We demonstrate that MRPO can improve both the average and worst-case performance in the training environments, and provide the learned policy with a better generalization capability in unseen testing environments. | [
"Reinforcement Learning",
"generalization"
] | Reject | https://openreview.net/pdf?id=kdm4Lm9rgB | https://openreview.net/forum?id=kdm4Lm9rgB | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"nzmRo__rhpb",
"vDp6q9EPwK2",
"3Xvij8RNA9K",
"YuboIfEDEgi",
"qDmrEva94lo",
"xDWUPKnKNVM",
"756hpFO4utv",
"95Q-RhIDUCX",
"IUrCSVMQ0A0",
"9HGj9QSmABI",
"vBKfbHv0rFB",
"_0uPUvHZK9",
"4VwK6O_grWf",
"JJfulIKMmvN",
"prRsijY1qA",
"BnlbUBTtMPs",
"hVyquA5GeqP",
"RLBJHEg3WlB",
"edlYnHc1j4a"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040350906,
1606303893864,
1606301362509,
1606301277080,
1606301004187,
1606300868995,
1606300624710,
1606300398885,
1606299974130,
1606299859778,
1606299732824,
1606299500264,
1606299418033,
1606299282872,
1606298589187,
1604470499646,
1603909642622,
1603885851690,
1603856614174
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3728/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3728/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3728/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3728/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper tackles the problem of mitigating the effect of model discrepancies between the learning and deployment environments. In particular, the author focus on the worst-case possible performance. The paper has both an empirical and theoretical flavor. The algorithm they derived is backed by theoretical guarantees. There exists a gap between the theory presented and the final practical algorithm, which generated some elements of concern from the reviewers. Some of these issues (choice and sensitivity of the Lipschitz constant, in what cases can we make that assumption, choice of p_w, discrepancy between the theoretical proposal and the practical algorithm) are well addressed in the rebuttal. However, after careful examination of the reviews, the meta-reviewer is still not convinced that the paper meets the minimum requirements for acceptance, as many of the reviewers' initial concerns still remain.\"}",
"{\"title\": \"Overall Author Response to All the Anonymous Reviewers\", \"comment\": \"We would like to thank all the anonymous reviewers for their constructive comments to help us improve this paper. We have carefully considered all of them in the rebuttal version of our paper, where our changes are highlighted in blue.\\n\\nHere, we would like to summarize the shared concerns from reviewers and the corresponding major revisions that we have made in the rebuttal version. For the detailed explanation, please also refer to our responses to each reviewer's comments.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C-1) Concerns on Empirical Evaluations\\n\\n R-1) Evaluation on HalfCheetah: In Fig. 1(f) of the original submission, it was seen that MRPO did not outperform DR. We hypothesized and thought that this was because the original parameter range we set for the Halfcheetah task (e.g., the friction range of $[0.5, 1.1]$) was too narrow to cause seriously poor performance on the $10\\\\%$ worst-case environments. In the rebuttal version, we have validate this hypothesis and reported new experiment result of the Halfcheetah task by enlarging the friction range from $[0.5, 1.1]$ to $[0.2, 2.5]$, which was denoted as HalfcheetahBroadRange. The training curves of 10\\\\% worst-case return have been shown in Fig. 1(l), which can demonstrate that MRPO outperforms the other baselines.\\n\\n R-2) More evaluation benchmarks: We have evaluated MRPO on more mujoco benchmarks like InvertedDoublePendulum, and also on classical control task like Cartpole. In addition, we have enlarged the friction range from the original $[0.5, 1.1]$ to $[0.2, 2.5]$ to form a new setting, denoted as HalfcheetahBroadRange. \\n\\n R-3) Evaluation on unseen environments for other benchmarks: We have also shown the comparison results on unseen environments for other benchmarks (e.g., Walker2D, HalfCheetahBroadRange, InvertedDoublePendulum and Cartpole), to provide empirical support for the generalization capability of MRPO.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C-2) Concerns on Lipschitz Assumption and Hyperparameter $\\\\kappa$'s Tuning\\n\\n R-4\\uff09We have added Appendix A.9 to analyze the Lipschtz assumption, and Appendix A.10 to study the hayperparameter $\\\\kappa$.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C-3) Concerns on Monte Carlo Sampling and Estimation \\n\\n R-5) We have added Appendix A.8 to analyze the Monte Carlo Estimation of $\\\\eta(\\\\pi\\\\vert p)$, and the impact of number of sampled trajectories $L$ both theoretically and empirically.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C4) Concerns on Selection of the Worst-case environment $p_w$\\n\\n R-6) We have modified Algorithms 1 and 2 to clarify how to select the worst-case environment $p_w$.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C5) Concerns on Bounded Reward Function Condition in Theorem 1\\n\\n R-7) In Theorem 1 of the rebuttal version, we have stated this bounded reward function condition. And in Appendix A.7, we have also listed the reward functions of the five robot control tasks evaluated in this paper to support this condition.\\n\\n$\\\\\\\\\\\\\\\\$\\n\\n(C6) Concerns on Assumption of Similar Worst-case Performance Between Two Iterations\\n \\n R-8) In Appendix A.5, we have added new empirical evaluation of MRPO on Hopper to validate this assumption.\"}",
"{\"title\": \"Author Response to Concern on Experiments\", \"comment\": \"Comment 3: \\\"The experiments are slightly inadequate, the effects of tunable hyperparameter $\\\\kappa$ should be further analyzed; In unseen environment, the MRPO algorithm is only tested on one environment.\\\"\", \"response\": \"We would like to thank the reviewer for this suggestion. In Section 4, Appendix A.10 and Appendix A.11 of the rebuttal version, we have added more empirical evaluations in the following three aspects.\\n\\n1) Analysis of hyperparameter $\\\\kappa$: In Appendix A.10 of the rebuttal version, we have added the following theoretical analysis and empirical evaluation on the hyperparameter $\\\\kappa$.\\n\\n Theoretically, in Algorithm 2, $\\\\kappa$ is a hyperparameter that controls the trade-off between the expected cumulative discounted reward $\\\\eta(\\\\pi_k|p_i) $ and distance $\\\\Vert p_i - p^k_w \\\\Vert$ to the worst-case environment. A larger $\\\\kappa$ means that the policy cares more about the poorly-performing environments, while a smaller $\\\\kappa$ would par more attention to the average performance. As empirical evaluation, we conduct experiment of MRPO on Hopper with different choices of hyperparameter $\\\\kappa$. The training curves of both average return and the 10\\\\% worst-case return are shown in Figs. 5(a) and 5(b) of the rebuttal version, respectively. It can be verified that for the fixed value choice of $\\\\kappa$, the curve of $\\\\kappa=5$ outperforms the curves of $\\\\kappa=20, 40, 60$ in terms of the average return in Fig. 5(a), while the curve of $\\\\kappa=60$ outperforms the curves of $\\\\kappa=5, 20, 40$ in terms of the 10\\\\% worst-case return in Fig. 5(b). In practical implementation, we gradually increase $\\\\kappa$ to a fixed high value. It can therefore strike a tradeoff between the average return and 10\\\\% worst-case return, demonstrating the best performance both in Figs. 5(a) and 5(b) of the rebuttal version.\\n\\n2) Evaluations on other benchmarks: In Section 4 of the rebuttal version, we have evaluated MRPO on more mujoco benchmarks like InvertedDoublePendulum, and also on classical control task like Cartpole. In addition, we have enlarged the friction range from the original $[0.5, 1.1]$ to $[0.2, 2.5]$ to form a new setting, HalfcheetahBroadRange. \\n\\n3) More evaluations on unseen environments: In Section 4 and Appendix A.11 of the rebuttal version, we have also shown the comparison results on unseen environments for other benchmarks (e.g., Walker2D, HalfCheetahBroadRange, InvertedDoublePendulum and Cartpole), to provide empirical support for the generalization capability of MRPO.\"}",
"{\"title\": \"Author Response to Assumption of Similar Worst-case Performance Between Two Iterations\", \"comment\": \"Comment 2: \\\"About the monotonic worst-case performance improvement theorem, the proof says '... the approximation is made under the assumption that the worst-case environment between two iterations are similar, which stems from the trust region constraint we impose on the update step between current and new policies ...', however, the trust region constraint can only limit the difference between policy updates, the similarity between worst-case environments can not be promised.\\\"\\n\\n\\\"In theorem 2, the formula (50) and (51) in the proof, is this approximation reasonable? Since the policy is updated, the worst-case environment may have changed a lot. Similarly, if the updated policy changes very little, can we make $\\\\pi_{new} = \\\\pi_{old}$?\\\"\", \"response\": \"We would like to thank the reviewer for pointing out this issue. In Appendix A.5, Fig. 3 of the rebuttal version, we have added new empirical evaluation of MRPO on Hopper to validate this assumption, as follows.\\n\\nTo verify the assumption made in Theorem 2, in Fig. 3, we study how the parameters of environments with poor performance scatter in the parameter space with different dimensions. Specifically, we plot the heatmap of return for the range of Hopper environments used for training, achieved by using MRPO to update the policy between two iterations. It can be validated that at the iteration $k=300$, the poorly performing environments of the two policies before and after the MRPO update concentrate in the same region, i.e., the area of small frictions. The same result can be observed for the iteration $k=350$.\\n\\nFor example, as shown in Figs. 3(a) and 3(b), at iteration $k=300$, $p_w^{300} = (750, 0.5)$, the MC estimation of $\\\\eta(\\\\pi_{300}\\\\vert p_w^{300})$ is $487.6$ and that of $\\\\eta(\\\\pi_{301}\\\\vert p_w^{300})$ is $532.0$. At iteration $k=301$, $p_w^{301} = (1027.8,0.5)$ and the MC estimation of $\\\\eta(\\\\pi_{301}\\\\vert p_w^{301})$ is $517.6$. As shown in Figs. 3(c) and 3(d), at iteration $k=350$, $p_w^{350} = (861.1,0.5)$, the MC estimation of $\\\\eta(\\\\pi_{350}\\\\vert p_w^{350})$ is $385.9$ and that of $\\\\eta(\\\\pi_{351}\\\\vert p_w^{350})$ is $422.2$. At iteration $k=351$, $p_w^{351} = (750,0.5)$ and the MC estimation of $\\\\eta(\\\\pi_{351}\\\\vert p_w^{351})$ is $394.0$. In both cases, the empirical results can support the assumption that we made in Equation (52), i.e., the expected returns of worst-case environment between two iterations are similar.\\n\\nPlease note that we have slightly modified the proof of Theorem 2 in Appendix A.4 to be consistent with the above empirical verification of assumption in Theorem 2.\"}",
"{\"title\": \"Author Response to Bounded Reward Function Condition\", \"comment\": \"First of all, would like to thank the reviewer for providing the detailed comments. Please see below our detailed responses to these comments, and corresponding revisions in the rebuttal version of our paper.\\n\\n$\\\\\\\\\\\\\\\\$\", \"comment_1\": \"\\\"For Lemma 1: The conclusion is based on the assumption that the worst case $\\\\rho(\\\\pi|p_w) - \\\\max_p \\\\rho(\\\\pi|\\\\rho)$ is bounded (Proof A.1). However, such equation does not strictly holds without bounded reward function. The author should stated the condition.\\\"\", \"response\": \"1) We would like to thank the reviewer for pointing out bounded reward function condition. In Theorem 1 of the rebuttal version, we have stated this bounded reward function condition.\\n\\n2) In Appendix A.7 of the rebuttal version, we have also listed the reward functions of the five robot control tasks evaluated in this paper to support this condition, as follows.\\n\\nReferring to the source code of OpenAI gym, the reward function for the five robot control tasks evaluated in this paper are listed below.\", \"hopper_and_walker2d\": \"\\\\begin{align*}\\n R = x_{t+1} - x_t + b - 0.001\\\\vert a_t \\\\vert^2;\\n\\\\end{align*}\", \"halfcheetah\": \"\\\\begin{align*}\\n R = x_{t+1} - x_t - 0.001\\\\vert a_t \\\\vert^2;\\n\\\\end{align*}\", \"cartpole\": \"\\\\begin{align*}\\n R = 1, \\\\quad \\\\text{if the pole does not fall down};\\n\\\\end{align*}\", \"inverteddoublependulum\": \"\\\\begin{align*}\\n R = b - c_{dist} - c_{vel}.\\n\\\\end{align*}\\nIn Hopper, Walker2d and Halfcheetah, $x_{t+1}$ and $x_{t}$ denote the positions of the robot at timestep $t+1$ and $t$, respectively. For Hopper and Walker2d, $b\\\\in \\\\{0,1\\\\}$, and $b$ equals $0$ when the robot falls down or $1$ otherwise. The squared norm of action represents the energy cost of the system. Since the maximum distance that the robot can move in one timestep and the energy cost by taking an action at each timestep are bounded, these three tasks all have the bounded reward function. In Cartpole, the reward is always $1$. In InvertedDoublePendulum, $b$ equals $0$ when the pendulum falls down or $10$ otherwise, $c_{dist}$ is the distance between the robot and the centre, and $c_{vel}$ is the weighted sum of the two pendulum's angular velocities. Since all the three parameters $b$, $c_{dist}$ and $c_{vel}$ are physically bounded, the reward function, as a linear combination of them, is also bounded.\"}",
"{\"title\": \"Author Response to Other Concerns on Typo, Notation, and Free-to-Use Environments\", \"comment\": \"Comment 2: \\\"Minor remarks: p3 detalis -$<$ details\\\"\", \"response\": \"1) Please note that the environments used in our experiments on were all implemented based on Roboschool, which is an open-source software and free-to-use. The link for accessing Roboschool is https://openai.com/blog/roboschool/. \\n\\n2) According the the reviewer's suggestion, we have also evaluated the proposed MRPO algorithm in Cartpole, which is an open-source classical control task.\", \"comment_3\": \"\\\"I found the $\\\\rho$ notation for cumulative reward a bit confusing especially when $p$ is involved in the equations, maybe a $\\\\nu$ instead would improve readability?\\\"\", \"comment_4\": \"\\\" Experiments on non-free systems like Mujoco are not easily reproducible. A few experiments on free-to-use environments would improve the reproducibility of the paper.\\\"\"}",
"{\"title\": \"Author Response to Concern on Application of Generic CVaR Algorithm\", \"comment\": \"First of all, we would like to thank the reviewer for providing the detailed comments. Please see below our detailed responses to these comments, and corresponding revisions in the rebuttal version of our paper.\\n\\n$\\\\\\\\\\\\\\\\$\", \"comment_1\": \"\\\"This domain randomization model formally equivalent to a single (continuous) MDP where the the environment's dynamic is parametrized by the initial state distribution (for instance by enriching the MDP states by the p parameter). It is therefore unclear to me that a specific algorithm is required for the specific case of parametrized MDPs. What would be the performance of a generic CVaR algorithm like \\\"Risk-constrained reinforcement learning with percentile risk criteria\\\" (Chow et al. 2017) on this setting? I found the idea of diverting the TRPO approximation bound into a safety bound appealing. Applied to a single MDP it could lead to a CVaR variant of TRPO.\\\"\", \"response\": \"Please note that as a representative of robust RL algorithms, EPOpt in (Rajeswaran et al., 2017) was in essence a generic CVaR algorithm in the parameterized MDP case, which aims to maximize the conditional value at risk, i.e., the expected reward over the subset of environments with the lowest expected reward. \\n\\nSpecifically, the optimization problem that EPOpt aims to solve is as follows:\\n\\\\begin{align*}\\n \\\\max_{\\\\theta, y} \\\\int_{\\\\mathcal{F}(\\\\theta)} \\\\eta(\\\\pi_{\\\\theta}\\\\vert p) P(p)dp \\\\quad s.t.\\\\quad Pr(\\\\eta(\\\\pi_{\\\\theta} \\\\vert p)\\\\leq y) = \\\\epsilon,\\n\\\\end{align*}\\nwhere $\\\\mathcal{F}(\\\\theta) = \\\\{ p\\\\vert \\\\eta(\\\\pi_{\\\\theta}\\\\vert p) \\\\leq y \\\\}$ is the set of environment parameters that produce the worst $\\\\epsilon$ percentile of expected returns, and $y$ is the $\\\\epsilon$-quantile of expected return $\\\\eta$. It can be seen that this optimization problem can be viewed as the CVaR optimization under the parametrized MDP. For practical implementation, EPOpt proposed to optimize the policy on the subset of trajectories from the worst $\\\\epsilon$ percentile environments, which was essentially an approximation solution to CVaR problem under parametrized MDP.\\n\\nIn this paper, the baseline PW-DR was the practical implementation of EPOpt algorithm. Through performance evaluation on five different robot control tasks, we could see that compared to PW-DR, the proposed MRPO improved both the average and worst-case performance in the training environment, and achieved a better generalization performance in the unseen environments.\", \"reference\": \"Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. 2017.\"}",
"{\"title\": \"Author Response to Unseen Environment Results for Other Tasks, Clarification in Theorem 1, and Selection of the Worst-case Parameter\", \"comment\": \"Comment 4: \\\"For the experiments on generalization to unseen environments, only the results for Hopper is provided, which may not be sufficient to demonstrate the behavior of each algorithm. It would be great to provide the heatmap results for other domains, i.e. Walker and HalfCheetah.\\\"\", \"response\": \"1) Please note that we have modified Algorithm 2 according to Reviewer 2's comments, to clarify how to determine the worst-case environment $p_w$. In the modified version of Algorithm 2, we sampled $L$ trajectories for each environment in Line 5. Then, by using Monte Carlo estimation, we determined $p_w$ based on the mean of the cumulative discounted reward of these $L$ sampled trajectories (i.e., $\\\\sum_{j=0}^{L-1}G(\\\\tau_{i,j}\\\\vert p_i)/L$) in Line 6.\\n\\n2) In Theorem 1, the worst-case environment parameter $p_w$ needs to be selected according to the expected cumulative discounted reward $\\\\eta(\\\\pi_k \\\\vert p)$ of each environment $p$, which is infeasible to get in the practical implementation. Therefore, as a commonly used alternative approach as in (Rajeswaran et al., 2017), we used Monte Carlo sampling of $\\\\sum_{j=0}^{L-1}G(\\\\tau_{i,j}\\\\vert p_i)/L$ to estimate the expectation $\\\\eta(\\\\pi\\\\vert p_i)=E_{\\\\tau}\\\\left[G(\\\\tau\\\\vert p_i) \\\\right]$, where we samples $L$ trajectories $\\\\{\\\\tau_{i,j}\\\\}_{j=0}^{L-1}$ . In Appendix A.8 of the rebuttal version, we have analyzed the Monte Carlo Estimation, and the impact of number of sampled trajectories $L$ both theoretically and empirically.\", \"comment_5\": \"\\\"In Theorem 1, is $p_w$ is the worst-case parameter for $\\\\pi$? or for $\\\\tilde{\\\\pi}$? It would be good if notation presents the dependence on the policy of $p_w$, e.g. $p_w^\\\\pi$.\\\"\", \"comment_6\": \"\\\"In Algorithm 2, line 6: how can $p_w$ be found? (even before completing sampling the trajectories for each environment)\\\"\", \"reference\": \"Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. 2017.\"}",
"{\"title\": \"Author Response to the Recurrent Policy\", \"comment\": \"Comment 3: \\\"It seems that two dense layers are used to construct the policy and value networks in the experiments. Why was the recurrent (e.g. LSTM) policy not used? Since the recurrent policy can implicitly embed system identification, I think the performance of the DR baseline could have been improved with the use of the recurrent policy. It would be great to see the performance comparison when the recurrent policy is used for MRPO and baselines.\\\"\", \"response\": \"Packer et al. (2018) have conducted extensive performance comparison when two network architectures were used for policy and value functions for many different baselines, including PPO and EPOpt-PPO. i) The first network architecture was the feed-forward (FF) architecture of multi-layer perceptrons (MLP) with two hidden layers of 64 units each. ii) The second was the recurrent (RC) architecture. In the RC architecture, the policy and value functions were the outputs of two separate fully-connected layers on top of a one-hidden-layer RNN with LSTM cells of 256 units, and the RNN itself was on top of an MLP with two hidden layers of 256 units each. Their experiments (please refer to Table 2 in (Packer et al., 2018) for more detail) showed that for the same baseline, the utilization of the second RC architecture would significantly degrade the generalization performance in all the cases, as compared to using the first FF architecture. Therefore, in this paper, we adopted the first feed-forward network architecture with two hidden layers of 64 units each to construct the policy and value functions of MRPO and the baselines.\", \"reference\": \"Charles Packer, Katelyn Gao, Jernej Kos, Philipp Krahenbuhl, Vladlen Koltun, and Dawn Song. Assessing generalization in deep reinforcement learning. arXiv preprint arXiv:1810.12282, 2018.\"}",
"{\"title\": \"Author Response to Concern on Experimental Results\", \"comment\": \"Comment 2: \\\"Some of the experimental results are not convincing. For example, in Figure 1f, MRPO underperforms DR, even if DR does not consider the worst-case performance during optimization at all.\\\"\\n\\n\\\"In Figure 1, what does the shaded-area stand for? standard deviation? standard error? it is not clear that MRPO outperforms other baselines statistically significantly.\\\"\", \"response\": \"1) Evaluation on HalfCheetah: In Fig. 1(f) of the original submission, it was seen that MRPO did not outperform DR. We hypothesized and thought that this was because the original parameter range we set for the Halfcheetah task (e.g., the friction range of $[0.5, 1.1]$) was too narrow to cause seriously poor performance on the $10\\\\%$ worst-case environments. In the rebuttal version, we have validate this hypothesis and reported new experiment result of the Halfcheetah task by enlarging the friction range from $[0.5, 1.1]$ to $[0.2, 2.5]$, which was denoted as HalfcheetahBroadRange. The training curves of 10\\\\% worst-case return have been shown in Fig. 1(l), which can demonstrate that MRPO outperforms the other baselines. \\n\\n\\n2) Explanation on the shaded area in Figure 1: In training, we run each algorithm on all the environments for five different random seeds. In Figure 1, the solid curve was used to represent the average performance of each algorithm on all the five seeds, while the shaded-area denoted the standard error of the algorithms' performance on all the five seeds. In Section 4.1 of the rebuttal version, we have added corresponding clarification for the shaded-area.\\n\\n3) More Evaluations on Cartpole and InvertedDoublePendulum: In Fig. 1 and Table 2 of the rebuttal version, we have also shown empirical evaluation for two new robot control tasks, Cartpole and InvertedDoublePendulum. The newly reported results could also validate that MRPO generally outperforms the other baselines in the Cartpole and InvertedDoublePendulum tasks.\"}",
"{\"title\": \"Author Response to Lipschitz Assumption\", \"comment\": \"First of all, we would like to thank the reviewer for providing the detailed comments. Please see below our detailed responses to these comments, and corresponding revisions in the rebuttal version of our paper.\\n\\n$\\\\\\\\\\\\\\\\$\", \"comment_1\": \"\\\"The assumption that the transition dynamics model is L-Lipschitz with respect to the environment parameter seems to be strong.\\\"\\n\\n\\\"How natural is the model's Lipschitz assumption? Are many real-world problems satisfying this assumption?\\\"\", \"response\": \"1) Reason to make the Lipshitz assumption: In robot control tasks, classical optimal control methods commonly utilize the differential equation to formulate the dynamic model, which then indicates that the transition dynamics model is $L_p$-Lipschitz and this formulated dynamic function can be used to estimate the Lipschitz constant $L_p$.\\n\\n For example, the inverted double pendulum, one of our newly added test environments, can be viewed as a two-link pendulum system (Chang et al.,2019). To simplify the analysis, we illustrate here a single inverted pendulum, which is the basic unit that forms the inverted double pendulum system. The single inverted pendulum has two state variables $\\\\theta$ and $\\\\dot{\\\\theta}$, and one control input $u$, where $\\\\theta$ and $\\\\dot{\\\\theta}$ represent the angular position from the inverted position and the angular velocity, respectively, and $u$ is the torque. The system dynamics can therefore be described as\\n\\\\begin{align}\\n \\\\ddot{\\\\theta} = \\\\frac{mgl \\\\sin{\\\\theta} + u -0.1\\\\dot{\\\\theta}}{m l^2},\\n\\\\end{align}\\nwhere $m$ is the mass, $g$ is the Gravitational acceleration, and $l$ is the length of pendulum. In our setting, we may choose $m$ as the variable environment parameter $p$. Since the above system dynamics are differentiable w.r.t. $m$, it can be verified that the maximum value of the first derivative of the system dynamic model can be chosen as the Lipschitz constant $L_p$.\", \"reference\": \"Chang, Ya-Chien, Nima Roohi, and Sicun Gao. \\\"Neural Lyapunov control.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n2) Relation between the Lipschitz constant and the hyperparameter $\\\\kappa$: From (3), it can be seen that the second term of the bound provided in Theorem 1 is not only dependent on the expected distance $\\\\epsilon(p_w \\\\Vert p)$, but also on $\\\\frac{2|r|_{\\\\max}\\\\gamma}{(1-\\\\gamma)^2}$. Therefore, in the practical implementation (Algorithm 2, Line 7), the Lipschitz constant was integrated into the hyperparameter $\\\\kappa$ which was the tunable hyperparameter during the experiment. \\n\\n Theoretically, in Algorithm 2, $\\\\kappa$ is a hyperparameter that controls the trade-off between the expected cumulative discounted reward $\\\\eta(\\\\pi_k|p_i) $ and distance $\\\\Vert p_i - p^k_w \\\\Vert$ to the worst-case environment. A larger $\\\\kappa$ means that the policy cares more about the poorly-performing environments, while a smaller $\\\\kappa$ would par more attention to the average performance. As empirical evaluation, we conduct experiment of MRPO on Hopper with different choices of hyperparameter $\\\\kappa$. The training curves of both average return and the 10\\\\% worst-case return are shown in Figs. 5(a) and 5(b) of the rebuttal version, respectively. 
It can be verified that for the fixed value choice of $\\\\kappa$, the curve of $\\\\kappa=5$ outperforms the curves of $\\\\kappa=20, 40, 60$ in terms of the average return in Fig. 5(a), while the curve of $\\\\kappa=60$ outperforms the curves of $\\\\kappa=5, 20, 40$ in terms of the 10\\\\% worst-case return in Fig. 5(b). In practical implementation, we gradually increase $\\\\kappa$ to a fixed high value. It can therefore strike a tradeoff between the average return and 10\\\\% worst-case return, demonstrating the best performance both in Figs. 5(a) and 5(b) of the rebuttal version.\\n\\n3) Revision in the rebuttal version: We have added Appendix A.9 to analyze the Lipschtz assumption, and Appendix A.10 to study the hayperparameter $\\\\kappa$.\"}",
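As an illustration of the $L_p$ estimation described in point 1) of the response above, here is a small numerical sketch for the single pendulum; the torque value, angular velocity, and mass range are assumed purely for the example:

```python
import numpy as np

def theta_ddot(theta, theta_dot, u, m, g=9.81, l=1.0):
    """Single-pendulum dynamics from the equation quoted in the response."""
    return (m * g * l * np.sin(theta) + u - 0.1 * theta_dot) / (m * l**2)

def lipschitz_in_mass(theta_dot, u, m_grid, l=1.0):
    """The analytic derivative of theta_ddot w.r.t. m is
    -(u - 0.1*theta_dot) / (m^2 l^2) (independent of theta); take its
    maximum magnitude over the assumed mass range as an estimate of L_p."""
    dfdm = -(u - 0.1 * theta_dot) / (m_grid**2 * l**2)
    return np.max(np.abs(dfdm))

m_grid = np.linspace(0.5, 2.0, 200)            # assumed range for p = m
L_p = lipschitz_in_mass(theta_dot=0.0, u=2.0, m_grid=m_grid)
print(f"estimated Lipschitz constant in m: {L_p:.3f}")
```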
"{\"title\": \"Author Response to Concern on Empirical Evaluation\", \"comment\": \"Comment 4: \\\"Finally the evaluation of the algorithms have not been conducted thoroughly.\\\"\\n\\n\\\"The experiments do not give strong empirical support for the new algorithm. The authors only evaluate on three environments, which I think is not enough, can the authors add more mujoco benchmarks? Also from the current results, I can not conclude that MRPO is better than PPO-DR since the evaluated domain is only three. Further, can the authors run more iterations to make sure the algorithms converge? The curves now presented in the paper did not converge.\\\"\", \"response\": \"We would like to thank the reviewer for this suggestion. In Section 4 and Appendix A.11 of the rebuttal version, we have added more empirical evaluations in the following three aspects.\\n\\n1) We have evaluated MRPO on more mujoco benchmarks like InvertedDoublePendulum, and also on classical control task like Cartpole. In addition, we have enlarged the friction range from the original $[0.5, 1.1]$ to $[0.2, 2.5]$ to form a new setting, denoted as HalfcheetahBroadRange. \\n\\n2) In Fig. 1 of the rebuttal version, more iterations have been run to make sure that each algorithm converges.\\n\\n3) We have also shown the comparison results on unseen environments for other benchmarks (e.g., Walker2D, HalfCheetahBroadRange, InvertedDoublePendulum and Cartpole), to provide empirical support for the generalization capability of MRPO.\"}",
"{\"title\": \"Author Response to the Gap Between Algorithm 2 and Final Practical Implementation\", \"comment\": \"Comment 3: \\\"It would be great if the authors can explain the gap between algorithm 2 and your practical implementation of using the 10\\\\% worst-case environments. If so, then the algorithm the authors use in the experiments can be viewed as directly select top performance trajectories to perform policy optimization, which I think the final algorithm is not consistent with your algorithm presented in the methodology part (please correct me if I am wrong about the final algorithm).\\\"\", \"response\": \"1) Reason for using the 10\\\\% worst-case environments: In the robot control tasks that we tested in the experiments, the environment dynamics were determined by multiple factors, such as the density and friction. Under this circumstance, a policy may perform poorly in multiple different tuples of density and friction. In other words, a single worst-case environment usually may not represent all the environments where the current policy performs very poorly. Taking this into account, we therefore used the $10\\\\%$ worst-case environments in the practical implementation to replace using of the single worst-case environment.\\n\\n\\n2) Trajectory selection criterion and consistency with the methodology: In the final practical implementation, we did not select trajectories by only referring to their performance. Instead, we selected the subset of trajectories for training by referring to Line 9 in Algorithm 2, where the use of single worst-case environment was replaced by using the $10\\\\%$ worst-case environments to calculate $E' (p^k_w,\\\\pi_k)$ for the aforementioned reason. From the expression of $E'(p_i,\\\\pi_k)=\\\\sum_{j=0}^{L-1}G(\\\\tau_{i,j}\\\\vert p_i)/L -\\\\kappa\\\\Vert p_i - p^k_w \\\\Vert$, it can be seen that the trajectory selection is based on a trade-off between the performance and the distance to the worst-case environment, as we described in detail in the paragraph under Equation (5). Please also note that Algorithm 2 is consistent with Algorithm 1, with the only difference being applying the Lipschitz assumption. Therefore, we believed that the final practical algorithm was also consistent with Algorithm 1 in the methodology part.\\n\\n\\n3) Revision in the rebuttal version: In the beginning of Section 4, we have tried our best to clarify the reason why we used the 10\\\\% worst-case environments instead of a single worst-case environment for practical implementation.\"}",
"{\"title\": \"Author Response to Lipschitz Assumption and Lipschitz Constant Tuning\", \"comment\": \"Comment 2: \\\"In addition, the algorithm 1 proposed by authors requires to calculate a model discrepancy between $p_w$ and other environments $p_i \\\\sim P$, which is impractical to estimate by samples if the discrepancy is total variation distance. To achieve this, the authors assumes that the transition is lipschitz, with the requirement of tunning lipschitz constant.\\\"\\n\\n\\\"How do you choose or estimate the lipschitz constant? If the lipschitz constant is not right, then the bound will not given any practical guidance here.\\\"\", \"response\": \"1) Reason to make the Lipshitz assumption: In robot control tasks, classical optimal control methods commonly utilize the differential equation to formulate the dynamic model, which then indicates that the transition dynamics model is $L_p$-Lipschitz and this formulated dynamic function can be used to estimate the Lipschitz constant $L_p$.\\n\\n For example, inverted double pendulum, one of our newly added test environments, can be viewed as a two-link pendulum system (Chang et al., Neural Lyapunov control, 2019). To simplify the analysis, we illustrate here a single inverted pendulum, which is the basic unit that forms the inverted double pendulum system. The single inverted pendulum has two state variables $\\\\theta$ and $\\\\dot{\\\\theta}$, and one control input $u$, where $\\\\theta$ and $\\\\dot{\\\\theta}$ represent the angular position from the inverted position and the angular velocity, respectively, and $u$ is the torque. The system dynamics can therefore be described as\\n\\\\begin{align}\\n \\\\ddot{\\\\theta} = \\\\frac{mgl \\\\sin{\\\\theta} + u -0.1\\\\dot{\\\\theta}}{m l^2},\\n\\\\end{align}\\nwhere $m$ is the mass, $g$ is the Gravitational acceleration, and $l$ is the length of pendulum. In our setting, we may choose $m$ as the variable environment parameter $p$. Since the above system dynamics are differentiable w.r.t. $m$, it can be verified that the maximum value of the first derivative of the system dynamic model can be chosen as the Lipschitz constant $L_p$.\\n\\n2) Relation between the bound in Theorem 1 and the Lipshitz assumption: Guided from the bound proposed in Theorem 1, we formulated a constrained optimization problem in (4), where the second constraint constrained the expected distance over all the possible environments to the worst-case environment. Then, based on our theoretical derivation in the proof of Theorem 3 in Appendix A.2, we used TV distance between two environments to measure this expected distance, which was hard to estimate in practice. Alternatively, we were looking for a substitution variable that satisfied the following two properties: i) positively correlated to the TV distance and ii) easy-to-access. Since the environment dynamics were determined by the environment parameters which satisfied the Lipschitz continuity condition in many robot control tasks, we therefore utilized the distance between environment parameters to reflect the TV distance as in (8) and (9). \\n\\n3) Relation between the Lipschitz constant and the hyperparameter $\\\\kappa$: From (3), it can be seen that the second term of the bound provided in Theorem 1 is not only dependent on the expected distance $\\\\epsilon(p_w \\\\Vert p)$, but also on $\\\\frac{2|r|_{\\\\max}\\\\gamma}{(1-\\\\gamma)^2}$. 
Therefore, in the practical implementation (Algorithm 2, Line 7), the Lipschitz constant was integrated into the hyperparameter $\\\\kappa$ which was the tunable hyperparameter during the experiment. \\n\\n Theoretically, in Algorithm 2, $\\\\kappa$ is a hyperparameter that controls the trade-off between the expected cumulative discounted reward $\\\\eta(\\\\pi_k|p_i) $ and distance $\\\\Vert p_i - p^k_w \\\\Vert$ to the worst-case environment. A larger $\\\\kappa$ means that the policy cares more about the poorly-performing environments, while a smaller $\\\\kappa$ would par more attention to the average performance. As empirical evaluation, we conduct experiment of MRPO on Hopper with different choices of hyperparameter $\\\\kappa$. The training curves of both average return and the 10\\\\% worst-case return are shown in Figs. 5(a) and 5(b) of the rebuttal version, respectively. It can be verified that for the fixed value choice of $\\\\kappa$, the curve of $\\\\kappa=5$ outperforms the curves of $\\\\kappa=20, 40, 60$ in terms of the average return in Fig. 5(a), while the curve of $\\\\kappa=60$ outperforms the curves of $\\\\kappa=5, 20, 40$ in terms of the 10\\\\% worst-case return in Fig. 5(b). In practical implementation, we gradually increase $\\\\kappa$ to a fixed high value. It can therefore strike a tradeoff between average return and 10\\\\% worst-case return, as shown in Figs. 5(a) and 5(b). Therefore, even if there is estimation error on $L_p$, it can be compensated in practice by tuning the hyperparameter $\\\\kappa$.\\n\\n4) Revision in the rebuttal version: We have added Appendix A.9 to analyze the Lipschtz assumption, and Appendix A.10 to study the hayperparameter $\\\\kappa$.\"}",
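A minimal sketch of the gradual increase of kappa mentioned above; the linear ramp shape and the warm-up fraction are our assumptions, since the response only states that kappa is increased to a fixed high value:

```python
def kappa_schedule(iteration, total_iterations, kappa_max=60.0, warmup_frac=0.5):
    """Ramp kappa linearly from 0 to kappa_max over the first part of
    training, then hold it fixed (schedule details assumed)."""
    progress = min(1.0, iteration / (warmup_frac * total_iterations))
    return kappa_max * progress

# e.g., kappa at iterations 0, 25, 50, and 75 out of 100
print([round(kappa_schedule(t, 100), 1) for t in (0, 25, 50, 75)])
```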
"{\"title\": \"Author Response to Uncertainty Caused by the Finite Samples\", \"comment\": \"First of all, we would like to thank the reviewer for providing the detailed comments. Please see below our detailed responses to these comments, and corresponding revisions in the rebuttal version of our paper.\\n\\n$\\\\\\\\\\\\\\\\$\", \"comment_1\": \"\\\"the author provide a lower bound for the worst-case performance, ..., the lower bound presented in the paper does not take the uncertainty caused by the finite samples into account, which will not give guidance to design empirical algorithms since the variance of the mc return of the policy is large.\\\"\\n\\n\\\"I feel that selecting the worst-case environment is one of the key challenging of the proposed algorithm. I did not find the description how to choose the $p_w$ given a set of environments ${p_i}^{M-1}_{i=0}$. If the authors means that the expected return of the a single trajectory can be used to select the worst-case environment, then how do your algorithm can guarantee the expected return of the sampled trajectories is the exact performance of the environment? The author did not give finite sample high confidence upper bound for empirical mc estimation, and the selection of the worst case environment would be hard to implement in practical settings? \\\"\", \"response\": \"1) Description on selection of $p_w$: In Theorem 1, the worst-case environment parameter $p_w$ needs to be selected according to the expected cumulative discounted reward $\\\\eta(\\\\pi\\\\vert p)$ of environment $p$. Please note that in the rebuttal version, following Reviewer 3's suggestion, we have changed the notation from $\\\\rho(\\\\pi\\\\vert p)$ in the original submission to $\\\\eta(\\\\pi\\\\vert p)$ to denote this expected cumulative discounted reward, such that possible confusion with the environment parameter $p$ is avoided. However, $\\\\eta(\\\\pi\\\\vert p)$ is infeasible to get in the practical implementation. Therefore, as a commonly used alternative approach as in (Rajeswaran et al., 2017), we used in Algorithms 1 and 2 the mean of the cumulative discounted reward of $L$ sampled trajectories $\\\\sum_{j=0}^{L-1}G(\\\\tau_{i,j}|p_i)/L$ to approximate the expectation $\\\\eta(\\\\pi| p_i)=E_{\\\\tau}[G(\\\\tau| p_i) ]$ of any environment $p_i$, by using Monte Carlo method. In the original submission, we followed the setting in (Rajeswaran et al., 2017) and let $L=1$, i.e., $G(\\\\tau_{i,1}\\\\vert p_i)$ of a single trajectory $\\\\tau_{i,1}$ was used to estimate $\\\\eta(\\\\pi\\\\vert p_i)$. We then determined the worst-case environment $p_w$ based on $G(\\\\tau_{i,1}\\\\vert p_i)$ of a given set of environments ${p_i}^{M-1}_{i=0}$. In the following, we will analyze the impact of $L$ on the estimation error.\", \"reference\": \"Aravind Rajeswaran, Sarvjeet Ghotra, Balaraman Ravindran, and Sergey Levine. EPOpt: Learning robust neural network policies using model ensembles. 
2017.\\n\\n\\n2) Theoretical analysis of the impact of $L$: Referring to Chebyshev's inequality, for any environment $p_i$ and any $\\varepsilon > 0$, with probability of at least $1-\\frac{\\sigma^2}{L\\varepsilon^2}$, we have\\n$ \\left\\vert \\frac{\\sum_{j=0}^{L-1}G(\\tau_{i,j}\\vert p_i)}{L} -\\frac{\\sum_{j=0}^{L-1}E_{\\tau_{i,j}}[G(\\tau_{i,j}\\vert p_i)]}{L} \\right\\vert = \\left\\vert \\frac{\\sum_{j=0}^{L-1}G(\\tau_{i,j}\\vert p_i)}{L} -\\eta(\\pi\\vert p_i)\\right\\vert \\leq \\varepsilon, $ where $\\sigma^2=Var(G(\\tau\\vert p_i))$ is the variance of trajectory $\\tau$'s return. From the above equation, we see that the variance of the return does affect the MC estimation of $\\eta(\\pi\\vert p)$, and a larger $L$ can guarantee a higher probability for the convergence of $\\sum_{j=0}^{L-1}G(\\tau_{i,j}\\vert p_i)/L$ to $\\eta(\\pi\\vert p_i)$. \\n\\n\\n3) Empirical evaluation of the impact of $L$: In practice, we have conducted experiments with MRPO on Hopper with different choices of $L$. We found that a larger $L$ would not greatly affect the performance in terms of average return, as shown in Fig. 4(a) in the rebuttal version, but would significantly increase the training time, as shown in Fig. 4(b) in the rebuttal version. In other words, for the same number of training iterations, a larger $L$ would consume significantly longer running time than a smaller $L$, while the performance is similar. Therefore, we set $L=1$ in our practical implementation of MRPO to strike a trade-off between the approximation accuracy and time complexity in training.\\n\\n\\n4) Revision in the rebuttal version: We have modified Algorithms 1 and 2 to clarify how to select the worst-case environment $p_w$. We have also added Appendix A.8 to analyze the Monte Carlo estimation of $\\eta(\\pi\\vert p)$, and the impact of the number of sampled trajectories $L$ both theoretically and empirically.\"}",
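The Chebyshev bound quoted above is easy to check numerically; a one-function sketch with an illustrative return variance:

```python
def chebyshev_confidence(var_return, L, eps):
    """Lower bound on P(|mean of L returns - eta| <= eps):
    at least 1 - sigma^2 / (L * eps^2), clipped at zero."""
    return max(0.0, 1.0 - var_return / (L * eps**2))

# e.g., return standard deviation 100 and tolerance eps = 50
for L in (1, 5, 25):
    print(L, round(chebyshev_confidence(var_return=100.0**2, L=L, eps=50.0), 3))
# prints 0.0, 0.2, and 0.84: larger L sharply tightens the guarantee
```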
"{\"title\": \"Review for Monotonic Robust Policy Optimization\", \"review\": \"In this paper, the authors proposed a more robust policy optimization method for domain randomization, by constraining the gap between the average performance of the whole range of environments and the performance of the worst-case environments. To achieve this, the author provide a lower bound for the worst-case performance, though the lower bound does not take the uncertainty of the finite samples into account.\\n\\nIn addition, the algorithm 1 proposed by authors requires to calculate a model discrepancy between $p_{w}$ and other environments $p_{i} \\\\sim P$, which is impractical to estimate by samples if the discrepancy is total variation distance. To achieve this, the authors assumes that the transition is lipschitz, with the requirement of tunning lipschitz constant. For empirical evaluation, the author compare with PPO with DR and PW-DR on three continuous benchmark mujoco task, which demonstrate that MRPO has some advantage over the other two algorithms.\", \"the_followings_are_my_detailed_comments_and_questions\": [\"I feel that selecting the worst-case environment is one of the key challenging of the proposed algorithm. I did not find the description how to choose the $p_{w}$ given a set of environments $\\\\{ p_{i} \\\\}_{i=0}^{M-1}$. If the authors means that the expected return of the a single trajectory can be used to select the worst-case environment, then how do your algorithm can guarantee the expected return of the sampled trajectories is the exact performance of the environment? The author did not give finite sample high confidence upper bound for empirical mc estimation, and the selection of the worst case environment would be hard to implement in practical settings?\", \"How do you choose or estimate the lipschitz constant? If the lipschitz constant is not right, then the bound will not given any practical guidence here.\", \"It would be great if the authors can explain the gap between algorithm 2 and your practical implementation of using the 10% worst-case environments. If so, then the algorithm the authors use in the experiments can be viewed as directly select top performance trajectories to perform policy optimization, which I think the final algorithm is not consistent with your algorithm presented in the methodology part (please correct me if I am wrong about the final algorithm).\", \"The experiments do not give strong empirical support for the new algorithm. The authors only evaluate on three environments, which I think is not enough, can the authors add more mujoco benchmarks? Also from the current results, I can not conclude that MRPO is better than PPO-DR since the evaluated domain is only three. Further, can the authors run more iterations to make sure the algorithms converge? The curves now presented in the paper did not converge.\", \"Overall I think there is a gap between the methodology presented in the paper and the final practical algorithm, and the lower bound presented in the paper does not take the uncertainty caused by the finite samples into account, which will not give guidance to design empirical algorithms since the variance of the mc return of the policy is large. Finally the evaluation of the algorithms have not been conducted thoroughly.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"interesting algorithm with theoretical support\", \"review\": \"summary:\\nThis paper introduces Monotonic Robust Policy Optimization (MRPO), an RL algorithm that aims to jointly optimize policy and domain sampling distribution, with the goal of improving policy performance for both average and worst-case scenarios and addressing the model discrepancy between the training and target environments. They derive a lower bound for the worst-case performance, which comprises the average performance, policy change, and the statistical distance between the worst and average case environments. A TRPO-like monotonic performance improvement guarantee is provided for the worst-case expected return. Finally, a practical approximation to MRPO is proposed, which imposes the assumption on Lipschitz continuity with respect to the environment parameters and circumvents the estimation of total variation distance between the worst-case environment and the sampled environment. Experiments are conducted on three control tasks with diverse transition dynamics parameters, where MRPO could improve both average and worst-case performance in the training environments, and it shows better generalization to the unseen test environments than baseline algorithms.\", \"pros\": [\"The theoretical analysis is provided, which shows the relationship between the worst-case and average performance for the first time.\", \"The algorithm is backed by the theoretical guarantee of monotonic worst-case performance improvement.\"], \"cons\": [\"The assumption that the transition dynamics model is L-Lipschitz with respect to the environment parameter seems to be strong.\", \"Some of the experimental results are not convincing. For example, in Figure 1f, MRPO underperforms DR, even if DR does not consider the worst-case performance during optimization at all.\"], \"comments_and_questions\": [\"How natural is the model's Lipschitz assumption? Are many real-world problems satisfying this assumption?\", \"In Figure 1, what does the shaded-area stand for? standard deviation? standard error? Also, it is not clear that MRPO outperforms other baselines statistically significantly.\", \"It seems that two dense layers are used to construct the policy and value networks in the experiments. Why was the recurrent (e.g. LSTM) policy not used? Since the recurrent policy can implicitly embed system identification, I think the performance of the DR baseline could have been improved with the use of the recurrent policy. It would be great to see the performance comparison when the recurrent policy is used for MRPO and baselines.\", \"For the experiments on generalization to unseen environments, only the results for Hopper is provided, which may not be sufficient to demonstrate the behavior of each algorithm. It would be great to provide the heatmap results for other domains, i.e. Walker and HalfCheetah.\", \"In Theorem 1, is $p_w$ is the worst-case parameter for $\\\\pi$? or for $\\\\tilde \\\\pi$? It would be good if notation presents the dependence on the policy of $p_w$, e.g. $p_w^\\\\pi$.\", \"In Algorithm 2, line 6: how can $p_w^k$ be found? (even before completing sampling the trajectories for each environment)\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An improvement of EPOpt based on a TRPO-like lower bound on worst-case cumulative policy reward\", \"review\": \"Motivated by the domain transfer problem in RL where policies are trained on simulators that may not reflect perfectly the reality, this paper propose a new policy optimization algorithm named MRPO that is expected to be robust to changes in the environment's dynamic.\\nThe formal setting and the notations are the same as in EPOpt (Rajewara 2017): each variant of the environment is an MDP parametrized by a parameter p and the trained policy is expected to be robust to (adversarial) changes on p.\\nInstead of focusing on worst cases with a CEM-like procedure on p distribution like in EPOpt, the authors propose to divert the TRPO approximation bound into a safety bound.\\nTheorem 1 gives a TRPLO-like lower bound, Theorem 2 show that optimizing for the LHS of Theorem 1 inequality may not degrade the wort-case reward.\\nThe experiments study both the 10% word-case returns and the avarge returns. They show that MRPO improves clearly from EPOpt (renamed PW-DR for the occasion), the improvement against simple uniform domain randomization is less significant.\\n\\nProbably because this paper relies on notions gathered from both (Rajewara 2017) and (Schulman et al. 2017), I found the 8 pages of the main paper quite dense and hard to follow. The proofs in the appendix are however clearly detailed and easy to read. I checked integrally the proof of Theorem 1/3 without any difficulty.\\n\\nThis domain randomization model formally equivalent to a single (continuous) MDP where the the environment's dynamic is parametrized by the initial state distribution (for instance by enriching the MDP states by the p parameter).\\nIt is therefore unclear to me that a specific algorithm is required for the specific case of parametrized MDPs.\\nWhat would be the performance of a generic CVaR algorithm like \\\"Risk-constrained reinforcement learning with percentile risk criteria\\\" (Chow et al. 2017) on this setting ?\\nI found the idea of diverting the TRPO approximation bound into a safety bound appealing. Applied to a single MDP it could lead to a CVaR variant of TRPO.\", \"minor_remarks\": \"p3 detalis -< details\\nI found the \\\\rho notation for cumulative reward a bit confusing especially when p is involved in the equations, maybe a \\\\nu instead would improve readability ?\\nExperiments on non-free systems like Mujoco are not easily reproducible. A few experiments on free-to-use environments would improve the reproducibility of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Model discrepancy between enviornments plays a role in generalization\", \"review\": \"This paper focuses on the generalization issue in reinforcemetn leanring, specifically aims to address the problems of domain randomization(DR) technique. Different from standard DR which treats all the sample environment as equal, this paper proposed to improve the performance over all possible environments and the worst-case environment concurrently. This paper theoretically derives a lower bound for the worst-case performance of a given policy over all environment, and in practical, the proposed method, monotonic robust policy optimization(MRPO) carries out a two-step optimization to imporve the lower bound such as to maximize the averaged and worst-case policy perfomance.\\n\\n\\nThis paper is well written and the key concept is clearly introduced. The Theorem.1 makes the connections between the averaged and the worst-case performance, such that maximizing the worst-case performance can be solved by maximizing the averaged performance problem with some trajectories from environments with both poor and good-enough performance. The emprical results also support the theorical analysis.\\n\\n1. For Lemma 1: The conclusion is based on the assumption that the the worst case $\\\\rho(\\\\pi|p_w) - \\\\max_p \\\\rho(\\\\pi|\\\\rho)$ is bounded (Proof A.1). However, such equation does not strictly holds without bounded reward function. The author should stated the condition.\\n\\n2. About the monotonic worst-case performance improvement theorem, the proof says \\\"... the approximation is made under the assumption that the worst-case environment between two iterations are similar, which stems from the trust region constraint we impose on the update step between current and new policies...\\\", however, the trust region constraint can only limit the difference between policy updates, the similarity between worst-case environments can not be promised.\\n\\n3. In theorem 2, the fomula (50) and (51) in the proof, is this approximation reasonable? Since the policy is updated, the worst-case environment may have changed a lot. Similarly, if the updated policy changes very little, can we make $\\\\pi_{new}=\\\\pi_{old}$ ? \\n\\n4. The experiments are slightly inadequate, the effects of tunable hyperparameter k should be further analyzed; In unseen environment, the MRPO algorithm is only tested on one environment.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
ARQAdp7F8OQ | Brain-like approaches to unsupervised learning of hidden representations - a comparative study | [
"Naresh Balaji",
"Anders Lansner",
"Pawel Herman"
] | Unsupervised learning of hidden representations has been one of the most vibrant research directions in machine learning in recent years. In this work we study the brain-like Bayesian Confidence Propagating Neural Network (BCPNN) model, recently extended to extract sparse distributed high-dimensional representations. The saliency and separability of the hidden representations when trained on the MNIST dataset are studied using an external linear classifier and compared with other unsupervised learning methods that include restricted Boltzmann machines and autoencoders. | [
"neural networks",
"bio-inspired",
"brain-like",
"unsupervised learning",
"structural plasticity"
] | Reject | https://openreview.net/pdf?id=ARQAdp7F8OQ | https://openreview.net/forum?id=ARQAdp7F8OQ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"8Hgnvjzmk2F",
"plrN1ZYs5vd",
"YjsVaLZllIk",
"5uIcAaEsCO",
"Ptdh0Yw_eLb",
"hdeeoO9df1Q",
"-BW0o95zIy-",
"kodbT9JI86",
"PlpnYIYfN53",
"y3cd0C6bQfK"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040471517,
1606255366573,
1605820831524,
1605820768356,
1605820717042,
1605820561087,
1604636588877,
1604422376005,
1603925167640,
1603827979304
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3727/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3727/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3727/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3727/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3727/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3727/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3727/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3727/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3727/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper conducts a comparison between a small set of models (4 in total) for unsupervised learning. Specifically, the authors focus on comparing Bayesian Confidence Propagating Neural Networks (BCPNN), Restricted Boltzmann Machines (RBM), a recent model by Krotov & Hopfield (2019) (KH), and auto-encoders (AE). The authors compare trained weight distributions, receptive field structures, and linear classification on MNIST using the learned representations. The first two comparisons are essentially qualitative comparisons, while on classification accuracy, the authors report similar accuracy levels across the models.\\n\\nThis paper received mixed reviews. Reviewers 4 and 5 felt it did not contribute enough for acceptance, while Reviewers 2 & 3 were more positive. However, as noted by a few of the reviewers, this paper does not appear to achieve much, and provides very limited analysis and experiments on the models. It isn't introducing any new models, nor does it make any clear distinctions between the models examined that would help the field to decide which directions to pursue. The experiments add little insight into the differences between the models that could be used to inform new work. Thus, the contribution provided here is very limited. \\n\\nMoreover, the motivations in this paper are confused. In general, it is important for researchers at the intersection of neuroscience and machine learning to decide what their goal is when building and or comparing models. Specifically, is the goal: (1) finding a model that may potentially explain how the brain works, or (2) finding better machine learning tools?\\n\\nIf the goal is (1), the performance on benchmarks is less important. However, clear links to experimental data, such that experimental predictions may be possible, are very important. That's not to say that a model must be perfectly biologically realistic to be worthwhile, but it must have sufficient grounding in biology to be informative for neuroscience. However, in this manuscript, as was noted by Reviewer 4, the links to biology are tenuous. The principal claim for biological relevance for all the models considered seems to be that the update rules are local. But, this is a loose connection at best. There are many more models of unsupervised learning with far more physiological relevance that are not considered here (see e.g. Olshausen & Field, 1996, Nature; Zylberberg et al. 2011, PLoS Computational Biology; George et al., 2020, bioRxiv: https://doi.org/10.1101/2020.09.09.290601). It is true that some of these models use non-local information, but given the emerging evidence that locality is not actually even a strict property in real synaptic plasticity (see e.g. Gerstner et al., 2018, Frontiers in Neural Circuits; Williams & Holtmaat, 2018, Neuron; Banerjee et al., 2020, Nature), an obsession with rules that only use pre- and post-synaptic activity is not even clearly a desiderata for neuroscience.\\n\\nIf the goal is (2), then performance on benchmarks, and some comparison to the SotA, is absolutely critical. Yet, this paper does none of this. Indeed, the performance achieved with the four models considered here is, as noted by Reviewer 4, very poor. In contrast, there have been numerous advances in unsupervised (or \\\"self-supervised\\\") learning in ML in recent years (e.g. 
Contrastive Predictive Coding, SimCLR, Bootstrap Your Own Latent, etc.), all of which achieve far better results than the four models considered here. Thus, the models being compared here cannot inform machine learning, as they do not appear to provide any technical advances. Of course, some models may combine goals (1) & (2), e.g. seeking increased physiological relevance while also achieving decent benchmark performance (see e.g. Sacramento et al., 2018, NeurIPS), but that is not really the situation faced here, as the models considered have little biological plausibility (as noted above) and achieve poor performance at the same time.\\n\\nAltogether, given these considerations, although this paper received mixed reviews, it is clearly not appropriate for acceptance at ICLR in the Area Chair's opinion.\"}",
"{\"title\": \"Additional experiments regarding point 5\", \"comment\": \"In the experiments, we compare the BCPNN model with modular hidden architecture (hypercolumns and minicolumns) with binary RBMs, binary AEs, and KH model with ReLU units. Following the review, we have also experimented with RBM and AE networks with softmax units to have similar hidden layer representation as BCPNN. The experiment involved fixing the size of the hidden layer to 3000 units, while changing the ratio of number of MCs per HC to the number of HCs. The two tables below show the accuracy (over 10 trials) of the linear classifier when trained on the hidden representations. The results in the first column are from hidden layers with only binary units and we denote this as 3000 : 1. We see that performance of both RBM and AE decreases with the increasing ratio of modularisation (more MCs per HC and fewer HCs). The highest accuracy is obtained for the least modular layer. When compared at the same ratio of 3.34 used for BCPNN (training accuracy of 100\\u00b10 and test accuracy of 97.77\\u00b10.12) both RBM and AE perform poorly.\", \"table_1\": \"RBM\\n\\n| #HC : #MCs per HC | 3000 : 1 | 1000 : 3 | 500 : 6 | 300 : 10 | 100 : 30 | 50 : 60 | 30 : 100 |\\n|:-----------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|\\n| Ratio | 0.00034 | 0.003 | 0.012 | 0.034 | 0.30 | 1.2 | 3.34 |\\n| train (%) | 98.92 \\u00b1 0.04 | 99.17 \\u00b1 0.05 | 98.69 \\u00b1 0.07 | 97.28 \\u00b1 0.05 | 91.88 \\u00b1 0.11 | 83.93 \\u00b1 0.34 | 77.59 \\u00b1 0.56 |\\n| test (%) | 97.67 \\u00b1 0.10 | 97.68 \\u00b1 0.03 | 97.25 \\u00b1 0.05 | 96.14 \\u00b1 0.03 | 91.63 \\u00b1 0.14 | 84.18 \\u00b1 0.35 | 77.94 \\u00b1 0.60 |\", \"table_2\": \"AE\\n\\n| #HC : #MCs per HC | 3000 : 1 | 1000 : 3 | 500 : 6 | 300 : 10 | 100 : 30 | 50 : 60 | 30 : 100 |\\n|:-----------------:|:-------------:|:------------:|:------------:|:-------------:|:-------------:|:--------------:|:------------:|\\n| Ratio | 0.00034 | 0.003 | 0.012 | 0.034 | 0.30 | 1.2 | 3.34 |\\n| train (%) | 100.00 \\u00b1 0.00 | 99.56 \\u00b1 0.02 | 98.72 \\u00b1 0.02 | 98.96 \\u00b1 0.18 | 97.74 \\u00b1 0.08 | 95.13 \\u00b1 0.20 | 93.18 \\u00b10.13 |\\n| test (%) | 97.78 \\u00b1 0.09 | 97.71 \\u00b1 0.09 | 97.68 \\u00b1 0.06 | 97.10 \\u00b1 0.22 | 96.57 \\u00b1 0.20 | 94.44 \\u00b1 0.30 | 92.95 \\u00b1 0.27 |\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the comments and valuable feedback.\\n\\nThe KH model as described in the original work is quite detailed and a proper description may not fit within this article. We therefore stated the key principles involved in the \\u201cRelated Works\\u201d section. As for AE and RBM, we assumed prior knowledge of these models, since they are widely used and by now a standard in machine learning. It is true that our comparisons of BCPNN with KH, RBM, and AE showed BCPNN outperforms KH, while there is no difference with RBM and AE. However, our objective is to compare not just the accuracy performance, but also the nature of the representations learnt (for instance receptive fields).\", \"1\": \"Thank you for pointing it out! The original KH model indeed reported higher accuracy compared to our version of the same model. This is because in the KH model, for the supervised classifier on the learned representation, a non-linear model (exponentiated ReLU activation along with an exponentiated loss function) was used. We decided to use the same simple classifier (with softmax activation and cross-entropy loss) for all the models evaluated as we were interested in hidden representations being label-wise linearly separable. We expected this to give a fair comparison. A short version of this comment is added as a footnote in the manuscript.\", \"2\": \"This bias regulation is related to the so-called \\u201cintrinsic plasticity\\u201d of cortical neurons. It is known that they can adapt their baseline activation based on previous activity [see e.g. Frans\\u00e9n, E., Tahvildari, B., Egorov, A. V., Hasselmo, M. E., and Alonso, A. A. (2006). Mechanism of graded persistent cellular activity of entorhinal cortex layer V neurons. Neuron 49, 735\\u2013746. doi: 10.1016/j.neuron.2006.01.036]. In a more detailed spiking model in the NEST simulator, it has been implemented as an adaptive potassium channel. Different neuron types can be expected to do this in different ways. Though the kind of regulation we describe here is hypothetical, it is local computation within the neuron and can be considered biologically plausible.\", \"3\": \"We will investigate the effects of changing this ratio as well as size of the hidden layer. However, we won\\u2019t be able to include it in this work.\", \"4\": \"The \\u201chybrid\\u201d representations are directly the result of the BPCNN modular architecture. Since different hypercolumns focus on different parts of the image and learn features over the receptive field, they form a distributed representation. The minicolumns within each hypercolumn learn prototypical samples (clustering). We consider this modular architecture of the layer to be generalizable to multiple layers in the cortical hierarchy. The columnar circuit (canonical microcircuit) and architecture (minicolumns and hypercolumns) is considered to be conserved throughout the cortex since the work of Mountcastle [Mountcastle, V.B., 1997. The columnar organization of the neocortex. Brain: a journal of neurology, 120(4), pp.701-722].\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the comments and valuable feedback.\\n\\nWe acknowledge that one-layer architecture with MNIST is quite limited. However, given the nature of constraints handled: unsupervised learning with local learning rules, there is little prior work that accomplishes the same performance. We are also working on applying our model to harder datasets and extending the network to deeper architecture. \\n\\nPoint 1,2,3,4: Corrected\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for the review and valuable feedback.\\n\\n1,2: This works concerns evaluating different models with the criteria of \\u201cbrain-like\\u201d as we define it in terms of local learning rules and unsupervised learning. We do acknowledge that recent work, some of which we cited, do a more extensive study of such methods. However, our objective here is to compare not just the accuracy performance, but also the nature of the representations learnt (for eg. receptive fields), specifically for unsupervised learning methods. In our comparative study, we also included popular neural network models like RBMs and AEs as they are unsupervised models with local learning. We acknowledge the limitation of evaluating only on MNIST, but extending such brain-like neural networks to harder datasets is still an active area of research and we work on it.\", \"3\": \"Thank you for pointing it out! The original KH model indeed reported higher accuracy compared to our version of the same model. This is because in the KH model, for the supervised classifier on the learned representation, a non-linear model (exponentiated ReLU activation along with an exponentiated loss function) was used. We decided to use the same simple classifier (with softmax activation and cross-entropy loss) for all the models evaluated as we were interested in hidden representations being label-wise linearly separable. We expected this to give a fair comparison. A short version of this comment is added as a footnote in the manuscript.\", \"4\": \"It is true that Illing et al. (2019) showed random projections and local random Gabor filters outperform many single-layer unsupervised as well as supervised learning models like backprop and feedback-alignment. As we mentioned earlier, our objective here is to compare not just the accuracy performance, but also the nature of the representations learnt. Also, local projections would be limited to image-like data with a local correlation structure, since localized receptive fields can only be hard-coded for such data. All the models we compare do not make this restriction for learning the weight parameters (and structural plasticity) and are thus more general in nature. Our model with structural plasticity would find the pixel dependencies and give the same performance even if MNIST images were transformed by a permutation operation - the same for all images. In for instance olfactory sensory data there is not a topology like in e.g. the visual system. A biologically plausible learning architecture should be able to handle such data as well.\", \"5\": \"The normalization within a hypercolumn using a softmax operation is not a drawback or biologically unrealistic as we see it. Mapping our model to the cortex, activity of each minicolumn represents the probability of an event and is the average firing rate of around hundred excitatory neurons (pyramidal neurons), while the inhibitory neurons (basket cells) project locally to all the minicolumns within the same hypercolumn and provide local lateral inhibition and competition. In the visual system it has been suggested to provide local divisive normalization [see e.g. Carandini M, Heeger DJ, Movshon JA. Linearity and normalization in simple cells of the macaque primary visual cortex. J Neurosci. 1997;17(21):8621-8644. doi:10.1523/JNEUROSCI.17-21-08621.1997]. 
We consider this computation biologically plausible, and chose to model it using a simple softmax operation in the interest of abstraction.\\n\\nThe \\u201chybrid\\u201d representations are directly the result of the BPCNN modular architecture. Since different hypercolumns focus on different parts of the image and learn features over the receptive field, they form a distributed representation. The minicolumns within each hypercolumn learn prototypical samples (clustering). We consider this modular architecture of the layer to be generalizable to multiple layers in the cortical hierarchy. The columnar circuit (canonical microcircuit) and architecture (minicolumns and hypercolumns) are considered to be conserved throughout the cortex since the work of Mountcastle [Mountcastle, V.B., 1997. The columnar organization of the neocortex. Brain: a journal of neurology, 120(4), pp.701-722].\"}",
"{\"title\": \"Response to AnonReviewer5\", \"comment\": \"We thank the reviewer for the comments and valuable feedback.\", \"1\": \"We now included a figure (Fig. 1) explaining BCPNN in terms of the graphical model.\", \"2\": \"The derivation of BCPNN does include strong assumptions of factorial likelihood while calculating the posterior probability of hidden variables. This naive Bayes assumption is also comparable to the widely used artificial neuron models (McCulloch-Pitts) that assume inputs to be linearly separable. Although it is certainly the case that input data is neither conditionally independent or linearly separable, these assumptions allow for avoiding intractable computations. Furthermore, the naive Bayes assumption becomes lesser of a problem as we get better estimates of hidden variables.\\n\\nWe removed the second assumption that the input variables are factorial (from the older version of manuscript), as this is unnecessary in computing the posterior (it is absorbed while normalizing with softmax). It was written to be consistent with the previous BCPNN models, but we removed it now to avoid any further confusion.\", \"3\": \"This assumption is common in probabilistic modeling with neural networks. The indicator function is a binary valued (either 0 or 1), and is replaced with the real-valued data, which is interpreted as p(x). For identical treatment in Boltzmann machines see http://www.scholarpedia.org/article/Boltzmann_machine#Mean_field_Boltzmann_machines\", \"4\": \"We were not clear about this. What we intended to say was \\u201cOne disadvantage of a probabilistic model is that doing simple exact inference and learning on distributed representations is intractable and forces approximate solutions (like numerous sampling or variational methods)\\u201d. This is now corrected.\", \"5\": \"We used binary units for RBMs and AEs since it is widely used and well tested. Also, softmax units cannot be used in the KH model since it uses exponentiated ReLU units.\"}",
"{\"title\": \"Lacking in clarity\", \"review\": \"Summary:\\nThis paper proposes a set of biologically plausible update rules that can be used to compute latent representations. The paper presents a number of heuristics to set hyper-parameters and train the proposed model. The model is compared to RBMs, auto-encoders and another recently proposed biologically-plausible model.\", \"pros\": [\"The method explores a new kind of model motivated by having biologically plausible update rules.\", \"The experimental results include an interesting comparison of the learned features.\"], \"cons\": [\"The paper places emphasis on interpreting the update rules as Bayesian confidence propagation. Yet, the underlying probabilistic graphical model is not clearly described. It would be useful to have a figure in the paper that describes the model, including whether the model is directed or undirected, which variables are observed or unobserved, if the model is directed what is the generative model, etc.\", \"Two assumptions are mentioned about the graphical model : \\\\\\\\( P(X_1,..,X_N) = \\\\prod_i P(X_i) \\\\\\\\), and \\\\\\\\( P(X_1,..,X_N|Y_j) = \\\\prod_i P(X_i|Y_j) \\\\\\\\). Unless I misunderstood, the first assumption is saying that each dimension in the input data \\\\\\\\(X\\\\\\\\) is independent of the others. This is a very strong assumption and makes the model weak. The paper does not include an explanation about why these assumptions make sense, or how they influence the model's inductive bias relative to other probabilistic models.\", \"The derivation of the update rules is also unclear. The approximation that the indicator function \\\\\\\\(I(x_i=x^D_i)\\\\\\\\) can be replaced by its expected value \\\\\\\\(P(x^D_i)\\\\\\\\) is hard to understand. In particular, I could not understand how to parse Eq (4) which has a sum over \\\\\\\\(x_i\\\\\\\\). Is this still dependent on the input, given that the indicator function has been replaced by its expectation ? It would be good to have a more clear derivation of the update rules starting from the probabilisitic model.\", \"The paper makes some very general assertions that do not add to the point being made. For example, \\\"One disadvantage of probabilistic models is that the known methods do not scale well in practice.\\\" Models such as VAEs are probabilistic models which scale quite well to high-dimensional inputs and large amounts of data.\", \"In the introduced model, each HC seems to have a similar representational capacity as a softmax hidden unit. Therefore, instead of comparing to RBMs and AEs with sigmoid units, comparisons to RBMs and AEs with softmax hidden units will be more relevant.\", \"Overall, the paper can be improved along two directions : making the probabilistic interpretation more clear (specifying the graphical model clearly, deriving update rules), and doing experiments with softmax hidden units (so that the only thing changing is the update rules, and not the model architecture).\", \"---------------\", \"Post-rebuttal\", \"Figure 1 is helpful in understanding the model. Thank you for adding that.\", \"The additional experiments are also appreciated. 
This seems to indicate that it's not just the architecture but the learning rules that make the BCPNN model work well.\", \"It would also be helpful to visualize the features learned by softmax units in RBMs and AEs and see if this results in a similar pattern of HCs encoding broad regions and MCs encoding variations within those regions.\", \"It seems that the RBMs and AEs were not trained using any sparsity penalty. For example, see Page 11 of https://www.cs.toronto.edu/~hinton/absps/guideTR.pdf and https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf. Having a target sparsity can have a significant impact on the learned features. Higher sparsity makes the features look more stroke-like and localized, and less spread out all over the visual field (as they do in Fig 4, C and D).\", \"Based on the additional experiments, I will be increasing my score to 5. However, given that the main contribution of the paper is a comparative study, the paper can add value by doing a more thorough comparison against variants of AEs and RBMs that have otherwise similar properties (such as keeping the HC-MC (softmax) architecture and sparsity levels the same).\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An incremental contribution lacking convincing experiments\", \"review\": \"Summary:\\nThe Bayesian Confidence Propagating Neural Network has recently been extended to the case of unsupervised learning (Ravichandran et al., IJCNN, 2020). This paper compares this extension to restricted Boltzmann machines, autoencoders, and a biologically plausible model proposed by (Krotov & Hopfield, PNAS, 2019) on the MNIST dataset. For evaluation the authors consider the learned receptive fields and the classification performance of a linear classifier. The paper is very similar to (Ravichandran et al., IJCNN, 2020) but with an extended experimental section.\", \"positives\": [\"Biologically plausible methods for unsupervised learning are an interesting area of research.\", \"There has been relatively little research on structural plasticity.\"], \"concerns\": [\"The paper does not introduce anything new but merely compares existing methods.\", \"The comparison is not an extensive study, but limited to one dataset, MNIST, and few alternative proposals, of which only the KH model is deemed 'brain-like'.\", \"There seems to be something off with the experimental results. Krotov & Hopfield report a better test accuracy of 98.54% in their original paper in spite of using less hidden units.\", \"BCPNN's performance is mediocre. It is even outperformed by random shallow networks with fixed, localized, random & random Gabor filters in the hidden layer (Illing et al, Neural Networks, 2019)\", \"Lacking performance could be excused by greater biological plausibility as a neuroscientific contribution, which is however not the case here. As the authors themselves state, their model is 'abstract' (page 4) and not a neural implementation but merely 'uses implicit competition via a local softmax operation, the neural implementation of which would be lateral inhibition'.\"], \"minor_comments\": \"$\\\\pi(x_i)$ in Eq (6) is never introduced.\\n\\nThe line above Eq (11) should probably refer to (11) not (6).\\n\\nThere's a typo in the sentence after Eq (11).\\n\\nThe hybrid representation might be interesting. Is it any more biological than the well-known distributed and local representations?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper systematically investigates Bayesian Confidence Propagating Neural Networks (BCPNN) on learning unsupervised representations on MNIST dataset. It presents a comprehensive comparison of four different commonly used unsupervised methods.\", \"review\": \"Systematic investigation of biologically inspired algorithms and architectures is a very important research topic.\\n\\nThe paper investigates Bayesian Confidence Propagating Neural Networks (BCPNN) on learning unsupervised representations on MNIST dataset. It presents a comprehensive comparison of four different commonly used unsupervised methods. \\n\\nThe strong merit of BCPNN approach is the nice receptive fields of the hypercolumns (HC) and minicolumns (MC) learned by the proposed algorithm. As the authors point out, the advantage of the proposed algorithm is that it is able to produce sparse and highly localized (in the image plane) receptive fields for the MCs. Also, those filters look much cleaner than the counterparts of the classical algorithms considered in figure 3. Additionally, the authors demonstrate that their representations stand in line with previously published proposals in terms of the classification accuracy. \\n\\nThe main weakness of this work is that the proposed method has only been investigated on MNIST and in one-layer architectures. At the same time, given the novelty of the approach, I think it deserves attention even in this simplest setting considered in this manuscript.\", \"small_comments\": \"1. Section4 (3rd line) should be \\u201clearned\\u201d \\n2. There are some misprints around equation 11, such as the use of p(y) vs. P(y) is inconsistent. Also, it seem that the word \\u201clarge\\u201d is missing following the formula in the line after equation 11. \\n3. I also find panel B in figure 2 confusing. The way it is presented makes it look like the authors have combined the outputs of four networks to feed into the classifier, while in reality those four networks were evaluated one by one. \\n4. It would be better to designate a new variable for the left hand side of equation 12, since I_ij is already taken.\", \"post_rebuttal\": \"Thank you for the response. I have read the discussion with other reviewers. A small comment: while I agree that it is reasonable to keep the classifier the same for all the models (softmax with cross entropy) for a fair comparison, I disagree that the activation function for the first layer should be kept as ReLU in the KH model. In fact KH explain that this is a suboptimal choice in Fig. 4 of their original paper. Using powers of ReLUs should increase the KH accuracy. Overall, I think that this is a nice paper, and I am inclined to keep my initial score.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This is a comparative study of four unsupervised learning approaches. The authors specifically focused on the brain-like BCPNN model. Overall, the methods were clear and fair. However, it is difficult to draw reliable conclusions based on current comparison results.\", \"review\": \"This paper evaluated four unsupervised learning approaches (BCPNN, KH, RBM, AE) by training a supervised classification layer on top of the hidden representation. Specifically, the authors qualitatively compared the receptive fields and quantitatively compared the classification performance across four models. The authors emphasized the advantages of BCPNN since it applies biologically plausible local learning rules and requires fewer epochs for convergence.\\n\\nOverall, the comparison was fair and solid. The description of the BCPNN model in section 3 was clear and comprehensive. But the authors did not provide sufficient details of key mechanisms in the other three models (especially the KH model, which also used brain-like learning rules). The detailed introduction of the other three models should be an important component since this is a comparative study. The results were clearly stated, but the insignificant difference in the classification accuracy comparison (Table 2) can hardly lead to a reliable conclusion about which unsupervised method is better. And it would be better if the authors could provide more interpretations about the \\\"hybrid\\\" receptive fields of HCs and MCs in BCPNN (Fig. 3A).\\n\\n##### Specific comments and questions:\\n1. In the original KH model (Krovtov & Hopfield, 2019), they also tested the classification accuracy on the MNIST dataset, and the result reached an error rate of 1.46% with 2000 hidden states. This is better than the reproduced result shown here (97.39% accuracy) with 3000 hidden states. Is this accuracy drop caused by a different setting of hyperparameters? Then is it fair to say that \\\"BCPNN outperforms KH\\\"?\\n2. Could you provide more explanation on Eq. 11? Why this dynamic update of $k_{\\\\beta}$ could be used as a desired bias regulation?\\n3. When the number of total hidden units is fixed, what would be the effect of changing the ratio $\\\\frac{N_{MC}}{N_{HC}}$ in BCPNN?\\n4. The \\\"hybrid\\\" structure of BCPNN provides interesting receptive field results in Fig. 3A. Is this structure generalizable to a model with multiple hidden layers?\\n\\n##### Minor:\\nPage 4. in 3.1 Bias regulation. Typo: Eq. 6 should be Eq. 11. And \\\"the value of gain $k_{\\\\beta}$ at around 1 when $P(y_j) - \\\\frac{p_{MaxEnt}}{4}$,\\\" missing $\\\\gg 0$ here?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
k2Hm5Szfl5Z | A new framework for tensor PCA based on trace invariants | [
"Mohamed Ouerfelli",
"mohamed Tamaazousti",
"Vincent Rivasseau"
] | We consider the Principal Component Analysis (PCA) problem for tensors $T \in (\mathbb{R}^n)^{\otimes k}$ of large dimension $n$ and of arbitrary order $k\geq 3$. It consists in recovering a spike $v_0^{\otimes k}$ (related to a signal vector $v_0 \in \mathbb{R}^n$) corrupted by a Gaussian noise tensor $Z \in (\mathbb{R}^n)^{\otimes k}$ such that $T=\beta v_0^{\otimes k} + Z$ where $\beta$ is the signal-to-noise ratio. In this paper, we propose a new framework based on tools developed by the theoretical physics community to address this important problem. They consist in trace invariants of tensors built by judicious contractions (an extension of the matrix product) of the indices of the tensor $T$. Inspired by these tools, we introduce a new process that builds for each invariant a matrix whose top eigenvector is correlated with the signal for $\beta$ sufficiently large. Then, we give examples of classes of invariants for which we demonstrate that this correlation happens above the best algorithmic threshold ($\beta\geq n^{k/4}$) known so far. This method has many algorithmic advantages: (i) it provides a detection algorithm that is linear in time and has only $O(1)$ memory requirements (ii) the algorithms are very suitable for parallel architectures and have a lot of potential for optimization given the simplicity of the mathematical tools involved (iii) experimental results show an improvement of the state of the art for the symmetric tensor PCA. Furthermore, this framework allows more general applications by being able to theoretically study the recovery of a spike in the form of $v_1 \otimes \dots \otimes v_k$ with different dimensions ($T \in \mathbb{R}^{n_1\times n_2\times \dots \times n_k}$ with $n_1,\dots, n_k \in \mathbb{N}$) as well as the recovery of a sum of different orthogonal spikes. We provide experimental results for these different cases that match well with our theoretical findings. | [
"Tensor",
"Principal Component Analysis",
"Tensor decomposition",
"trace invariant"
] | Reject | https://openreview.net/pdf?id=k2Hm5Szfl5Z | https://openreview.net/forum?id=k2Hm5Szfl5Z | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"6e2xsLRPt5-",
"x7qLBJhL7VD",
"zlSqZaj0Irw",
"EyrylnDj4To",
"bRqAc8z_ky4",
"zhKZTsZCE2Z",
"oMQmeSamrxp",
"ARRUyGmhYcy",
"pRAsk8Oa1Tk",
"NvIwXUMyrB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040353467,
1606219448622,
1606148794895,
1605680369054,
1605680271190,
1605679685801,
1605678920419,
1604116142051,
1603846712873,
1603841043079
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3725/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3725/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3725/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper studies the tensor principal component analysis problem, where we observe a tensor T = \\\\beta v^{\\\\otimes k} + Z where v is a spike and Z is a Gaussian noise tensor. The goal is to recover an accurate estimate to the spike for as small a signal-to-noise ratio \\\\beta as possible. There has been considerable interest in this problem, mainly coming from the statistics and theoretical computer science communities, and the best known algorithms succeed when \\\\beta \\\\geq n^{k/4} where n is the dimension of v. The main contribution of this paper is to leverage ideas from theoretical physics and build a matrix whose top eigenvector is correlated with v for sufficiently large \\\\beta using trace invariants. On synthetic data, the algorithms achieve better performance than existing methods.\\n\\nThe main negative of this paper is that it is not so clear how tensor PCA is relevant in machine learning applications. The authors gave some references to applications of tensor methods, but I want to point out that all of those works are about using tensor decompositions, which despite the fact that they are both about tensors, are rather different sorts of tools. Many of the reviewers also found the paper difficult to follow. I do think exposition is particularly challenging when making connections between different communities, as this work needs to introduce several notions from theoretical physics. I am also not sure how novel the methods are, since a somewhat recent paper Moitra and Wein, \\\"Spectral Methods from Tensor Networks\\\", STOC 2019 also uses tensor networks to build large matrices whose top eigenvalue is correlated with a planted signal, albeit for a different problem called orbit retrieval.\"}",
"{\"title\": \"General comment\", \"comment\": [\"We would thank all reviewers for the valuable comments and constructive feedback which help us to significantly improve the quality of the presentation of this work.\", \"We have uploaded a revisited version, in order to take into account the reviewer's comments and to clarify the notations and experimental details. More specifically, our main changes in the revision include:\", \"We paid a careful attention to clearly define, in an explicit way, all the introduced notations. We also add figures to illustrate the important ones. These definitions include:\", \"A formal definition of the contraction of indices (Section 2.1, end of page 3)\", \"A formal definition of tensor invariants (Section 2.1, end of page 3)\", \"A formal definition of the orthogonal group (Section 2.1, second paragraph) with a warning to not confuse it with the computational complexity (the context should make the distinction simple).\", \"A formal correspondence between the graphs and the trace invariants and how to obtain one from the other (section 2.1, beginning of the page 4).\", \"A formal definition of the matrix associated to a trace invariant and an edge (section 2.2). We also added an illustration to make the construction easier to comprehend (Figure 2).\", \"We defined the variance of the graph (Appendix C) and how to compute it.\", \"A formal definition of the intermediate graphs in the beginning of the appendix E.\", \"A clearer appendix on the perfect one-factorization (appendix C)\", \"We added details (number of independent experiments, number of power iterations, the time and memory requirements of each method, etc.) to the numerical simulations in the main text and in the appendix D.\", \"We added details about the complexity and the parallelization (appendix D section D.3)\", \"We put a complete version of the proofs and added details and graphs to make them more easily readable.\", \"We give more references concerning the practical applications of tensor PCA and tensor decomposition (multiple spikes recovery).\", \"We added more extended descriptions in the last sections of the main text for a smoother reading.\"]}",
"{\"title\": \"Thank you very much for your feedback [Part 3]\", \"comment\": \"* Let's take for example the algorithm associated to the melon diagram. It consists in i) calculating the n-dimensional matrix $M_{{i_1},{i_2}}=\\\\sum_{jk} T_{i_1 jk} T_{i_2 jk}$ and then ii) find its leading eigenvector. Since the matrix is n-dimensional, the time complexity of the second step will be negligible (for example just reading the tensor has a complexity of $n^3$ because that is its size.)\\n For the first step, which we would want to focus our optimization, we can calculate each element of $M$ independently of the others. And in the end we put them all together in the matrix. That is why we stated that it is suitable for parallel structure. We run some of our experiments on a cluster where we used without any issue parallelization to fasten our calculations.\\n We added a small comment about this in the appendix D about the speed of the experiments in our updated version.\\n \\n* We attempted to add several details and figures for the proofs, since such combinatorial proofs can sometimes be hard to follow.\"}",
"{\"title\": \"Thank you very much for your feedback [Part 2]\", \"comment\": [\"We added more graphs with the hope it will become clearer and put more explicitely the correspondance in Section 2.1 end of page 3/ beginning of page 4. We have stated in the updated version: \\\"A trace invariant of degree $d$ of a tensor $\\\\mathbf{T}$ of order $k$ admits a practical graphical representation as an edge colored graph $\\\\mathcal{G}$ obtained by following two steps: we first draw $d$ vertices representing the $d$ different copies of $\\\\mathbf{T}$. The indices of each copy is represented by $k$ half-edges with a different color for each index position as shown in Figure 1.a. Then, when two different indices are contracted in the tensor invariant, we connect their corresponding half-edges in $\\\\mathcal{G}$. Reciprocally, to obtain the tensor invariant associated to a graph $\\\\mathcal{G}$ with $d$ vertices, we take $d$ copies of $\\\\mathbf{T}$ (one for each vertex), we associate a color for each index position, and we contract the indices of the $d$ copies of $\\\\mathbf{T}$ following the coloring of the edges connecting the vertices. We denote this invariant $I_\\\\mathcal{G}(\\\\mathbf{T})$ .\\\"\", \"So if, for instance, we take the invariant $\\\\sum_{ijk} T_{ijk} T_{ijk}$. i) We have a product of two copies of the tensor $T$, so we draw two vertices. ii) Then since the tensor is of order 3, it has three indices. So, for each vertex we will draw three half edges with different colors (each color will represent an index position: 1st, 2nd or 3rd) to represent the three indices.\", \"iii) For each contraction of a pair of indices in the invariant expression, we will connect the half edges corresponding to these indices. For example the invariant $\\\\sum_{ijk} T_{ijk} T_{ijk}$. contracts the first index of the first copy of $T$ with the first index of the second copy of $T$, the second with the second, and the third with the third. Thus we obtained the melon diagram.\", \"For the other way around, if we have a graph and we want to find the invariant associated. We search for the degree of the graph (the number of vertices). It will give us the number of copies of the tensor $T$. Then we count for each vertex how many half edges it has, this will give us the order of the tensor (How many indices it has). Then we assign a color to each index position. In the end, for each edge connecting two vertices, we contract the indices associated to the two half edges of this edge.\", \"We made more explicit the definition of the matrix $M_{\\\\mathcal{G},e}$ and we added the new figure 2 to try to provide a better explanation to how we obtain the matrix from a graph $\\\\mathcal{G}$ and an edge $e$, which is a crucial part of our work.\", \"If we take the precedent example of the melon, that we consider our graph $\\\\mathcal{G}$, corresponding to the invariant $\\\\sum_{ijk} T_{ijk} T_{ijk}$. Cutting an edge will be equivalent to removing a contraction. Let's say we cut the edge of the color associated to the first position. It means that we are no longer setting the two first indices equal and summing over them. We will have $\\\\sum_{jk} T_{i_1 jk} T_{i_2 jk}$. We have two free indices $i_1$ and $i_2$, so we can define a matrix $M_{{i_1},{i_2}}=\\\\sum_{jk} T_{i_1 jk} T_{i_2 jk}$.\", \"$I_G(T)$ is the tensor invariant associated to the graph $G$ calculated for the tensor $T$. 
We added a more explicit introduction of $I_G(T)$ in the updated version.\", \"The loss function was $1-\\langle v,v_0\\rangle$, where $\\langle \\cdot,\\cdot \\rangle$ is the scalar product, $v$ is the vector output by the algorithm, and $v_0$ the signal vector we aim to recover. We completely removed this notation in the updated version since it was not essential.\", \"By dominating we meant that its operator norm was much larger than the others'; we removed this notation and put a more explicit formula in Section 2.3 of the updated version.\", \"The polynomial complexity is inherent to the definition of the trace invariants and the matrices associated to them. A trace invariant is a sum of products of tensor elements (like $\\sum_{ijk} T_{ijk} T_{kij}$), so it is by definition polynomial. The input is a tensor of size $n^3$, and the sum in the melonic invariant $\\sum_{ijk} T_{ijk} T_{ijk}$ is over three indices varying from 1 to n, so it takes $O(n^3)$ operations; hence the linearity in time. The linearity of the recovery algorithm was proven in a previous reference, which we now cite explicitly. We added a clarification about the time linearity of the two algorithms for the melon diagram in the updated version.\"]}",
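[To make the cut-edge construction concrete, here is a small numpy check of our own, using the melon example from this response: the matrix obtained by cutting one edge of the melon graph recovers the invariant when the two free indices are contracted again, i.e. when the trace is taken. The dimensions are arbitrary illustration values.]

```python
import numpy as np

n = 8
T = np.random.randn(n, n, n)

# Melonic trace invariant: contract all three index pairs of two copies of T.
I_melon = np.einsum('ijk,ijk->', T, T)

# "Cutting" the edge of the first colour removes that contraction and leaves
# two free indices i1, i2, defining the matrix M_{G,e}.
M = np.einsum('ajk,bjk->ab', T, T)

# Re-gluing the cut edge (contracting the two free indices, i.e. taking the
# trace of M) restores the invariant.
assert np.isclose(np.trace(M), I_melon)
```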
"{\"title\": \"Thank you very much for the feedback! [Part 1]\", \"comment\": \"The theoretical physics community has developed in the last decade new tools to study problems involving tensors [1]. This paper aimed to show that these tools are easily adaptable and are of great interest in tackling concrete machine learning problems. Indeed, we showed in this paper that existent well-studied tools (trace invariant) and developed new ones (matrices associated to the trace invariants) are able to tackle the important problem of tensor PCA and proven that they performed better than state-of-the-art in the usual important settings and were even able to tackle more general settings closer to real life applications like asymmetric tensorial data (video, image with color, etc.). However, one well-known difficulty in approaching such interdisciplinary subjects is the dictionary of vocabulary between communities, and we think it may have been an important source of confusion. That is why we attempted in the updated version to completely revise the notations throughout the paper. We want to reiterate our appreciations for the reviewers and their very helpful comments and feedback to help us improve the clarity of the paper.\\n\\n\\n* We added in the new version, at the end of Section 2.1, a more precise definition of the contraction: Let's define a contraction of a pair of indices as setting them equal to each other and summing over them, as in calculating the trace of a matrix ( $\\\\mathbf{A}_{ij} \\\\rightarrow \\\\sum_{i=1}^n \\\\mathbf{A}_{ii}$ )\\n \\n* O(n) refers here to the orthogonal group, we added a more precise definition, at the second paragraph of the section 2.1, in the updated version: $\\\\mathrm{O}(n)$ is the $n$-dimensional orthogonal group (i.e. the group of real matrices that satisfies $\\\\mathbf{O} \\\\mathbf{O}^\\\\top=\\\\mathbf{I}_n$). \\n $\\\\mathbf{O}$ refers to an orthogonal matrix.\\n We used $\\\\mathcal{O}()$ to refer to the computational complexity.\\n Unfortunately these different objects with similar symbols added a lot of confusion.\\n So in the updated version, we used $\\\\mathrm{O}(n)$ for the orthogonal group (and we added its definition) and $O(n)$ for the complexity (to match the custom of the community), we also hope that the context helps in distinguishing the two objects.\\n \\n[1] R. Gurau, \\u201cUniversality for Random Tensors,\\u201dAnn. Inst. H. PoincareProbab. Statist., vol. 50, no. 4, pp. 1474\\u20131525, 2014.\"}",
"{\"title\": \"Thank you very much for the feedback!\", \"comment\": [\"We want to thank the reviewer for his valuable feedback and comments. We reply to each of the reviewer's questions in the original order below:\", \"We incorporated various clarifications and definitions in the new version, like $\\\\langle . \\\\rangle$ which corresponds to a scalar product. We also tried to explain carefully all the notations introduced by the theoretical physics community (and non standard in machine learning) in the beginning of the paper.\", \"It is a typo, we addressed it in the new version.\", \"The variance of a graph is the variance of its associated trace invariant. It is given by $E((I_\\\\mathcal{G} - E(I_\\\\mathcal{G}))^2)$. We added many details and many graph illustrations in the demonstrations in order to clarify them (in particular the theorem $4$). We hope that makes the demonstrations more easy to read.\", \"We added clarifications in the new version and changed notations that may have been confusing in the algorithm 1: i) We first calculate theoretically the expectation and the variance of the invariant $I$ for a random gaussian model (we denote them $E(I^{(N)})$ and $\\\\sigma(I^{(N)})$), they depend only on $n$. ii) Then we compute the value (that we denote $\\\\alpha$) of this invariant for our tensor T (from which we want to detect the presence of a signal). iii) Chebyshev's theorem provides the probability that $T$ is a random tensor based on the distance of the value $\\\\alpha$ to the mean (thus the comparison with the variance). The variance of the noise model is not important since we can factor it out from the tensor: if $Z'$ is a random gaussian tensor with variance $\\\\sigma$, first we can find $\\\\sigma$ by plotting the distribution of the components of $Z'$, then we can introduce $Z$ such that $Z'=\\\\sigma Z $ . Thus, $Z$ would be a tensor whose components follow a standard normal distribution. So changing the variance of the model just adds a constant factor ($1/\\\\sigma$) to the detection and recovery threshold ($T=\\\\beta v^{\\\\otimes k} + Z'=\\\\beta v^{\\\\otimes k} + \\\\sigma Z= \\\\sigma (\\\\beta/\\\\sigma+Z)$.\", \"This was a typo, we addressed this issue in the new version of the paper.\", \"We added clarification to the theorem 2. Since the model is inherently probabilistic, it is mathematically impossible to have a detection or recovery with probability strictly equal to 1. The common procedure is to prove the theorems at the large $n$ limit ([1] and the other papers used for their proofs random matrix theory at large $n$) and to use the empirical results to check when this approximation of the large $n$ is valid ([1] noted that empirically it was valid above $n=25$, which is also what we observe with our experiments). We clarify this important point in the paper and thank the reviewer for bringing it to our attention.\", \"We introduced more carefully what we call the intermediate graphs in the new version. They are the graphs which has both a contribution from the noise random tensor **and** from the signal vector. Note that, the two other kinds of graphs are the pure noise graph and the pure signal graph. We also added more clarifications and an illustration to the appendix C discussing perfect one-factorization.\", \"We agree that the summands may have been confusing because we wrote some coefficients implicitly in the '$\\\\dots$' while we kept some others. 
In order to clarify it, we now make all the coefficients explicit and add the decomposition of the tetrahedral matrix by drawing its 16 contributions. The reviewer is perfectly right that what is mainly used in the proofs is the coefficient $\\beta^d$ next to $v^{\\otimes(k)}$.\", \"We added details about the experiments in Appendix D. For instance, concerning the recovery methods, we repeated the following procedure over 50 independent instances: i) We randomly generate the $n$ components of the signal vector $v_0$ and then normalize it.\", \"ii) We randomly generate the $n^3$ components of the random tensor $Z$. If we are in the symmetric case, we symmetrize it with the same normalization as [1].\", \"iii) We compute the tensor $T=Z+\\beta v_0^{\\otimes 3}$.\", \"iv) We compute the matrix constructed from contracting multiple copies of $T$ (for example associated to the melon: $M_{i_1 i_2} = \\sum_{j,k} T_{i_1 jk} T_{i_2 jk}$) as described in Figure 2. To compute it, we use the numpy tensordot function in Python. v) We find its leading eigenvector $v$.\", \"vi) We record the correlation between the obtained vector $v$ and the initial signal vector $v_0$.\", \"[1] E. Richard and A. Montanari, \\u201cA statistical model for tensor PCA,\\u201d in Advances in Neural Information Processing Systems, pp. 2897\\u20132905, 2014.\"]}",
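[A self-contained numpy sketch of one instance of the recovery experiment following steps i)-vi) above; the dimension, beta, seed, and function name are illustrative choices of ours, not the paper's exact settings.]

```python
import numpy as np

def run_instance(n=50, beta=3.0, seed=0):
    rng = np.random.default_rng(seed)
    v0 = rng.standard_normal(n)                          # (i) random signal vector,
    v0 /= np.linalg.norm(v0)                             #     then normalized
    Z = rng.standard_normal((n, n, n))                   # (ii) random noise tensor
    T = Z + beta * np.einsum('i,j,k->ijk', v0, v0, v0)   # (iii) spiked tensor
    M = np.tensordot(T, T, axes=([1, 2], [1, 2]))        # (iv) melon matrix
    _, vecs = np.linalg.eigh(M)                          # (v) eigendecomposition
    v = vecs[:, -1]                                      #     leading eigenvector
    return abs(v @ v0)                                   # (vi) correlation with v0

# Average correlation over 50 independent instances.
print(np.mean([run_instance(seed=s) for s in range(50)]))
```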
"{\"title\": \"Thank you very much for the feedback!\", \"comment\": \"We want to thank the reviewer for his valuable feedback and comments.\", \"we_answer_to_the_reviewer_concerns_below\": \"* **Applications for ML/AI/Language processing:**\\nTensor PCA and tensor decomposition (the recovery of many spikes addressed in Section 3.5) is motivated by the increasing number of problems in which it is crucial to exploit the tensorial structure [1]. Recently it was successfully used to address important problems in unsupervised learning (learning latent variable models, in particular latent Dirichlet allocation [2], [3]), supervised learning (training of two-layer neural networks, \\\\cite{janzamin2015beating}) and reinforcement learning ([4]). Moreover, we note that some of our results tends to generalize the applications of the methods to more practical settings like the case of a tensor with axes of different dimensions (adequate for data which are inherently asymmetric like a video). We added these elements in the introduction just before the related work paragraph.\\n\\n \\n* **Experiments on real data and detailed comparison of the methods:**\\nWe agree with the reviewer that experiments on real data would have been very interesting. However, this paper has a more theoretical leaning and primarily aims to introduce a new framework where we derive new algorithmic results.\\nFor a fair comparison to other existent methods, we favored synthetic data. We added the applications on real data as potential perspective. \\n \\n\\n\\n[1] N. D. Sidiropoulos, L. De Lathauwer, X. Fu, K. Huang, E. E. Papalexakis,and C. Faloutsos, \\u201cTensor decomposition for signal processing and machinelearning,\\u201dIEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3551\\u20133582, 2017.\\n\\n[2] A. Anandkumar, R. Ge, D. Hsu, S. M. Kakade, and M. Telgarsky, \\u201cTensordecompositions for learning latent variable models,\\u201dJournal of MachineLearning Research, vol. 15, pp. 2773\\u20132832, 2014.\\n\\n[3] A. Anandkumar, D. P. Foster, D. Hsu, S. M. Kakade, and Y.-K. Liu, \\u201cA spec-tral algorithm for latent dirichlet allocation,\\u201dAlgorithmica, vol. 72, no. 1,pp. 193\\u2013214, 2015.\\n\\n[4] M. Janzamin, H. Sedghi, and A. Anandkumar, \\u201cBeating the perils of non-convexity: Guaranteed training of neural networks using tensor methods,\\u201darXiv preprint arXiv:1506.08473, 2015.\\n\\n[5] K. Azizzadenesheli, A. Lazaric, and A. Anandkumar, \\u201cReinforcement learn-ing of pomdps using spectral methods,\\u201darXiv preprint arXiv:1602.07764,2016.\"}",
"{\"title\": \"A review of the paper \\\"A new framework for tensor PCA based on trace invariants\\\"\", \"review\": \"Summary:\\n \\nThe paper provides an interesting algorithm for tensor PCA, which is based on trace invariants. The problem consists of recovering a (single-spike/multiple orthogonal spikes) tensor corrupted by a Gaussian noise tensor. The authors proposed a new algorithm which allows recovering a signal for a sufficiently small signal to noise ratio. \\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, I vote for accepting. I like the idea, and the proofs seem to be coherent and correct. The problem has clear importance for the theoretical/statistical physics community; however, I am not convinced of the importance of the problem considered here for the ICLR community and appreciate the author\\u2019s comments on this. I also have a few minor concerns, which, hopefully, can be addressed by the authors in the rebuttal period. \\n \\n##########################################################################\", \"pros\": \"1. The paper takes an interesting question about tensor PCA and proposes a promising approach to solve it based on the trace invariants. For me, the problem is encouraging, while I would appreciate a discussion about possible machine learning/AI applications (learning latent variable models? anything else?)\\n \\n2. The mathematical justification of the statements seems to be correct for me and ok to follow. \\n\\n3. It is claimed in the paper that the algorithm improves the state-of-the-art (signal to noise ratio requirements) in several cases, while a brief survey/table of the recent results is missing. Unfortunately, I am not working in this area and probably not familiar with recent results\\n \\n##########################################################################\", \"cons\": \"1. Applications for ML/AI/Language processing are not very clear for me, and I would appreciate a discussion on this in the paper. \\n\\n2. Empirical justification. I would highly appreciate having more experiments on real data (if any) and a detailed comparison of the methods in terms of accuracy/memory/time.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review of \\\"A new framework for tensor PCA based on trace invariants\\\"\", \"review\": \"The paper presents a pair of interesting algorithms using trace invariants to detect the signal in the signal-plus-noise tensor PCA framework. The algorithms function by considering cutting an edge in the graph representation of the trace invariant, yielding a matrix whose leading eigenvector provides a (up to a rotation) estimate of the signal vector $v$. This algorithm appears to be very interesting and works well in a series of simulations.\\n\\nUnfortunately, the presentation of the paper makes it very difficult to assess the importance of the contribution. The introduction is well-written and well-motivated, though the later segmentation of the paper into many small subsections without much exposition makes the flow of the paper and its results hard to follow. In addition, the notation and terminology in the paper are imprecise and, with important terminology and symbology introduced without definition or background citation.\", \"pros\": [\"The proposed algorithm is clever and appears to do well compared to existing approaches in experiments.\", \"Well written introduction (with the only complaint being some minor grammatical errors).\"], \"cons\": [\"Important notation is introduced, and is not defined; Equation 4 is an example of this, where $\\\\langle \\\\cdot \\\\rangle$ (I assume this means $\\\\mathbb{E}$?), $\\\\bar{\\\\mathbf{T}}$, and $\\\\mathcal{E}^0(\\\\mathcal{G})$ are all undefined. This occurs often in the paper and in the appendix.\", \"In the $\\\\bullet, \\\\times, \\\\bullet$ decomposition at the start of Section 2.3, what is $\\\\sqrt{N}$?\", \"What is the variance of a graph (as in Theorem 4)? The proof sketch of this theorem is very hard to follow.\", \"Algorithm 1 is imprecise; what does \\\"compare $\\\\alpha$ to $\\\\sigma(I^{(N)}(T))$ mean? If $\\\\alpha>\\\\sigma(I^{(N)}(T))$ then a spike is detected? How do you compute the variance of $I^{(N)}(T)$? How would you compute this if the noise model did not have unit variance)?\", \"Both algorithms are only presented for 3-way tensors, but the Theoretical claims are for higher order tensors?\", \"The proofs of the theorems and the statement of the theorems are, in general, a bit imprecise. For example, in the proof of Theorem 2, Chebyshev's inequality will not guarantee disjointness everywhere, but only with high probability. This is the case if $\\\\beta_{det}$ is finite. This is a finite $\\\\beta_{det}$ result, with a claim only holding in the limit.\", \"In Theorem 5, what are the intermediate graphs/matrices? In addition, this section (and Appendix C discussing perfect one-factorization) are a bit opaque.\", \"Is the decomposition after equation 5 only for the melon graph? For more complex graphs (i.e., the tetrahedral), I believe you will have additional trace-like coefficients on all terms. In any event, I am confused about the summands. I do not see why the all $Z$ sum would have a $\\\\beta$, while the cross-terms would not. Furthermore, why would the all $v$ sum not have a $\\\\beta^d$ coefficient? This is what is implicitly being used in the proofs?\", \"In the experiments, important details are left out. What is the setup here: what are the $v$'s, how many iterations of tensor power method are applied, how many MC replicates are run to produce the error bars, what is the y-axis, what are the runtimes here, what is Random in Figure 6? 
More detail would help a lot to understand how your new approach compares (favorably, it appears) with the current literature.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\n\\nThis paper studies the detection and recovery problem in spiked tensor models in the form T = \\\\beta v0^\\\\otimes k + Z, where v0 is the underlying spike signal and Z is a Gaussian noise. The authors claim that they propose a new framework to solve the problem, by looking at the trace invariants of tensors. The authors provide a detection algorithm (Algorithm 1) and a recovery algorithm (Algorithm 2), as well as the corresponding phases. The authors claim that: 1) they \\\"build tractable algorithms with polynomial complexity\\\", \\\"a detection algorithm linear in time\\\"; 2) the algorithms are very suitable for parallel architectures; 3) an improvement of the state of the art for the symmetric tensor PCA experimentally. The authors furthermore discuss the asymmetric case and the multiple spike case.\", \"recommendation\": \"At the current stage I vote for rejection. I am not able to follow the proofs in this paper due to missing definitions of terms and notations. Also some claims are not proved. See below for details.\", \"pros\": [\"The methods used in the paper seem new for spiked tensor models.\", \"Some experimental results are provided.\"], \"cons\": [\"The readability of this paper severely suffers from its writing. At the current stage, filled with undefined or inconsistent notations and terms, this paper is not self-contained and hard to follow. This becomes worse considering the fact that this paper studies tensor problems -- many tensor-related terms have multiple definitions (e.g., eigenvalues, ranks). It will be very hard to follow the proofs if the definitions are unclear. Here is an incomprehensive list:\", \"Middle of Page 3: what is the *formal* definition of contracting (instead of saying \\\"equivalent to a matrix multiplication\\\")? Also, trace invariants are never formally defined in this paper.\", \"eq.(2),(3): what is O(n) here? Also, what does the bold O refer to? Right before eq.(4) the authors use another notation \\\\mathcal{O}(n). Is this the same as the first O(n)? In the abstract the authors use \\\\mathcal{O}(1) to refer to the constant order. Why the inconsistency?\", \"End of Page 3: how is \\\\mathcal{G} related to trace invariants formally?\", \"Section 2.2: this is not clear. What are the matrices here? What is the definition of M_{G,e}?\", \"Section 2.3: what is the definition of I_G(T)?\", \"Theorem 3: what is the Loss function here?\", \"Top of Page 5: what is the exact definition of \\\"dominating\\\" here?\", \"It should be noted that, without clear definitions of I_G(T) and M_{G,e}(T), there is no way to verify Algorithm 1 and 2.\", \"The authors claim \\\"polynomial complexity\\\" at the beginning of the paper, but it is never proved. Theorem 7 claims that Algorithm 1 and 2 run in linear time. I cannot find that in the proof.\", \"It is unclear why the algorithms \\\"are very suitable for parallel architectures\\\", as the authors have claimed. Have the authors tried running the experiments in parallel?\", \"Theorem 4, 5, 9, 10 do not have complete proofs.\"], \"minor_comments\": [\"Page 2 Notations: typeface of v is not consistent.\", \"Page 8: \\\"eg\\\" should be \\\"e.g.\\\"\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
bIrL42I_NF8 | On the Effect of Consensus in Decentralized Deep Learning | [
"Tao Lin",
"Lingjing Kong",
"Anastasia Koloskova",
"Martin Jaggi",
"Sebastian U Stich"
] | Decentralized training of deep learning models enables on-device learning over networks, as well as efficient scaling to large compute clusters. Experiments in earlier works revealed that decentralized training often suffers from generalization issues: the performance of models trained in a decentralized fashion is in general worse than the performance of models trained in a centralized fashion, and this generalization gap is impacted by parameters such as network size, communication topology, and data partitioning.
We identify the changing consensus distance between devices as a key parameter to explain the gap between centralized and decentralized training. We show that when the consensus distance does not grow too large, the performance of centralized training can be reached and sometimes surpassed. We highlight the intimate interplay between network topology and learning rate at the different training phases and discuss the implications for communication efficient training schemes. Our insights into the generalization gap in decentralized deep learning allow the principled design of better training schemes that mitigate these effects.
| [
"decentralized deep learning",
"performance",
"effect",
"consensus",
"training",
"models",
"generalization gap",
"consensus distance",
"deep learning models",
"learning"
] | Reject | https://openreview.net/pdf?id=bIrL42I_NF8 | https://openreview.net/forum?id=bIrL42I_NF8 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"b42d8XgbIj",
"KpAyOWOrhs",
"9mNRx3bS6mB",
"9IHOhRfowk-",
"1bRGsE4eDOw",
"1MPrFYNWfDl",
"ed6zLOC89vs",
"R4dEk-WDEZM",
"9-xu4M2Atva",
"77U6-cjQ-Z",
"4O-8uRk2zaZ",
"_NdYak17A7a"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040476101,
1605989093628,
1605749117300,
1605749048872,
1605748594726,
1605748533432,
1605748373366,
1605748118606,
1604270438378,
1603981721186,
1603776206949,
1603564705073
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3721/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3721/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3721/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3721/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3721/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The authors study the problem of (insufficient) generalization in gossip-type decentralized deep learning. Specifically, they establish an upper bound on the square of the consensus parameter distance, which the authors identify as a key quantity that influences both optimization and generalization. This upper bound (called the critical consensus distance) can be monitored and controlled during the training process via (e.g.) learning rate scheduling and tweaking the amount of gossip. A series of empirical results on decentralized image classification and neural machine translation are presented in support of this observation.\\n\\nInitial reviews were mixed. While all reviewers liked the approach, concerns were raised about the novelty of the results, the lack of theoretical depth, and the mismatch between theory and experiments. Overall, the idea of tracking consensus distance to control generalization seems to be a practically useful concept. During the discussion phase the authors were been able to (convincingly, in the area chair's view) respond to a subset of the criticisms. \\n\\nUnfortunately, concerns remained regarding the mismatch between the theoretical and empirical results, and in the end the paper fell just short of making the cut. \\n\\nThe authors are encouraged to carefully consider the reviewers' concerns while preparing a future revision.\"}",
"{\"title\": \"Acknowledging responses\", \"comment\": \"Thanks for your responses (specific to my questions and comments, as well as the general response). I will take this all into account. In my view the main limitation is the restriction to smaller tasks (CIFAR10, low-res ImageNet) with small networks. In my own experience, conclusions drawn on these smaller tasks often fail to hold or generalize to larger-scale tasks. However, the paper contains enough other innovations and insights, and the experimental methodology is otherwise rigorous. For now I am not inclined to change my decision (up or down). The paper is worthy of acceptance at ICLR.\"}",
"{\"title\": \"Response to R2, part 2/2\", \"comment\": \"### Observations on cosine learning rate schedule (Question 6)\\nWe include results of consensus distance control with half cosine learning schedule on ring topology (this scheme is visited in [3] as a new paradigm for CNN training).\", \"we_can_observe_that_the_effect_of_critical_consensus_distance_can_be_generalized_to_this_learning_rate_schedule\": \"there exists a critical consensus distance in the initial training phase (please refer to the inline Figure of Table 14 in the revised paper) ensures good optimization and generalization.\\n\\n| $\\\\Xi_{max}$ \\t| $1/2 \\\\Xi_{max}$ \\t| $1/4 \\\\Xi_{max}$ \\t| $1/8 \\\\Xi_{max}$ \\t| Complete \\t|\\n|------------------\\t|------------------\\t|------------------\\t|------------------\\t|------------------\\t|\\n| $92.10 \\\\pm 0.06\\\\quad$ \\t| $92.40 \\\\pm 0.10\\\\quad$ \\t| $92.83 \\\\pm 0.11\\\\quad$ \\t| $92.78 \\\\pm 0.05\\\\quad$ \\t| $92.84 \\\\pm 0.22$ \\t|\\n\\n[Comments on developing practical algorithms (Question 7)]\", \"our_work_aims_to_better_understand_the_importance_of_consensus_distance_on_different_phases_of_deep_learning_training\": \"performing multiple gossip steps serves as a way of controlling consensus distance for the understanding purpose, rather than as an efficient solution in practice.\\n\\nWe believe our insights can be utilized for efficient and effective algorithm design in decentralized deep learning, e.g. the communication topology design balancing the trade-off between communication efficiency and the spectral gap. For example, most prior works in communication-efficient topology design generally focus on improving the spectral gap of the topology (e.g. random matching idea in [1, 2]), as motivated by standard convergence analysis. Our work identifies the existence of the critical consensus distance and thus relaxes the requirement on the spectral gap: this insight provides more flexibility to design efficient and effective decentralized deep learning algorithms.\\n\\nWe thank the reviewer for pointing out Tsianos and Rabbat (2014), the paper uses multiple gossip steps on dual averaging methods (convex problems). We will include a discussion in the next revision. \\n\\n### Reference\\n1. MATCHA: Speeding up Decentralized SGD via Matching Decomposition Sampling.\\n2. SwarmSGD: Scalable Decentralized SGD with Local Updates\\n3. Bag of Tricks for Image Classification with Convolutional Neural Networks.\"}",
"{\"title\": \"Response to R2, part 1/2\", \"comment\": \"We thank the reviewer for the time and valuable feedback. We will add corrections/clarifications as suggested. Please find answers to specific comments below:\\n\\n### Connection with gradient diversity (Question 1)\\nThe connections between the consensus distance and gradient diversity measure are not obvious and is an interesting direction for future works. On the one hand, decentralized methods could suffer from similar problems as centralized ones if gradients are not diverse enough. On the other hand, it is harder to achieve consensus (some constant accuracy epsilon) on the diverse vectors (gradients) rather than on similar ones. \\n\\n### Connection with other methods like SWA/SWAP (Question 1)\\nOur empirical results share a similar insight as in SWA, SWAP, and Post-local SGD, but none of them consider decentralized learning.\\n\\nSWA is a method where models are sampled from the later stages of an SGD training run; when the weights of these models are averaged, they result in a model with much better generalization properties.\", \"swap_extends_swa_in_a_parallel_fashion\": \"it uses a large batch size to train the model close to convergence and then switches to several individual runs with a small mini-batch size. These individual runs serve as a way of sampling from a posterior distribution and can be averaged for better generalization performance (i.e. the idea of SWA).\\n\\nPost-local SGD, SWA, SWAP, as well as the empirical insights presented in our paper, are closely related: we first need sufficient small consensus distance to guarantee the optimization quality (in post-local SGD, SWA, and SWAP, the consensus distance equals 0) and thus different model averaging choices can be utilized in the later training phase for better generalization. Considering the later training phase, our empirical observations in decentralized learning suggest that we can improve the generalization through the simultaneous SGD with gossip averaging. This is analogous but different from SWA and SWAP that sample model independently (i.e., perform SGD) from the well-trained model and average over sampled models; and similar to Post-local SGD which performs simultaneous SGD steps with infrequent averaging.\\n\\n### Experiments on standard ImageNet (Question 2 & 3)\\nWe conducted experiments on downsampled ImageNet (image resolution 32) with ResNet-20-3 (width factor 3) in Table 3; it has already reached the limit of our computational resources.\\n\\nOur insights can be generalized to standard ImageNet training or other challenge tasks, as supported by the existing papers. For example, post-local SGD paper shows on standard ImageNet of the effectiveness of performing local SGD on later training phase (our takeaway message 2: a non-negligible consensus distance at middle phase can improve generalization), and Assran et al. 2019 preliminarily present in their Table 3 the result for standard ImageNet, which is also consistent with our insights (our takeaway message 1: critical consensus distance exists in the initial training phase ensures good optimization and generalization).\\n\\n### Insights for prolonged training on other phases (Question 4)\\nWe prolong the training for dec-phase-2 and dec-phase-3 for node n =32. All experiments are performed over two seeds. 
We can observe that although a longer training duration increases the performance, the improvement is rather small.\\n\\n| \\t| \\t| $75$ epochs \\t| $100$ epochs \\t| $125$ epochs \\t|\\n|-------------\\t|-----------------\\t|-------------------\\t|------------------\\t|------------------\\t|\\n| dec-phase-2 \\t| $\\\\Xi_{max}$ \\t| $93.04 \\\\pm 0.01$ \\t| $93.08 \\\\pm 0.08$ \\t| $93.19 \\\\pm 0.16$ \\t|\\n| \\t| $1/2 \\\\Xi_{max}$ \\t| $92.99 \\\\pm 0.30$ \\t| $93.05 \\\\pm 0.16$ \\t| $93.11 \\\\pm 0.17$ \\t|\\n| \\t| $1/4 \\\\Xi_{max}$ \\t| $92.87 \\\\pm 0.11$ \\t| $92.94 \\\\pm 0.03$ \\t| $93.06 \\\\pm 0.07$ \\t|\\n| dec-phase-3 \\t| $\\\\Xi_{max}$ \\t| $92.60 \\\\pm 0.00 $ \\t| $92.86 \\\\pm 0.16$ \\t| $92.87 \\\\pm 0.23$ \\t|\\n| \\t| $1/2 \\\\Xi_{max}$ \\t| $92.82 \\\\pm 0.21$ \\t| $92.90 \\\\pm 0.18$ \\t| $92.99 \\\\pm 0.25$ \\t|\\n| \\t| $1/4 \\\\Xi_{max}$ \\t| $92.85 \\\\pm 0.24$ \\t| $92.94 \\\\pm 0.19$ \\t| $92.97 \\\\pm 0.20$ \\t|\"}",
"{\"title\": \"Response to R4\", \"comment\": \"We thank the reviewer for the time and valuable feedback. We will add corrections/clarifications as suggested. Please find answers to specific comments below:\\n\\n### Convergence analysis v.s. Generalization performance\\nPlease refer to our [general response](https://openreview.net/forum?id=bIrL42I_NF8¬eId=R4dEk-WDEZM).\\n\\n### The extension to the primal-dual methods\\nOur paper aims to understand the limitations of decentralized SGD in deep learning for better algorithm design. \\nTo the best of our knowledge, we are not aware of practical primal-dual algorithms designed for decentralized deep learning. Extending primal-dual type algorithms to decentralized deep learning itself is non-trivial and is beyond the scope of this paper. \\n\\nWe will add a discussion of these two papers in our related work section.\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We thank the reviewer for the time and valuable feedback. We will add corrections/clarifications as suggested. Please find answers to specific comments below:\\n\\n### Convergence analysis v.s. Generalization performance (Question 1)\\nPlease refer to our [general response](https://openreview.net/forum?id=bIrL42I_NF8¬eId=R4dEk-WDEZM).\\n\\n### Results on SGD without momentum (Question 3)\\nPlease refer to our [general response](https://openreview.net/forum?id=bIrL42I_NF8¬eId=R4dEk-WDEZM).\\n\\n### Detailed derivatives for Remark2 (Question 2)\\nWe polished and re-formulated the derivatives for Remark 2 in Appendix A.1\\n\\n### Answer to Question 6\\nThanks for your comment, we will rename Remark 2 as Proposition 2 as both are important.\\n\\n### Reference for Lemma 4 (Question 5)\\nWe will add references in the new version.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We thank the reviewer for the time and valuable feedback. We will add corrections/clarifications as suggested. Please find answers to specific comments below:\\n\\n### Convergence analysis v.s. Generalization performance (Question 1)\\nPlease refer to our [general response](https://openreview.net/forum?id=bIrL42I_NF8¬eId=R4dEk-WDEZM).\\n\\n### Answer to the question \\\"there is no convincing explanation that the critical distance contributes to the generalization\\\" (Question 4)\\nPlease first refer to our *general response* titled \\u201cConvergence analysis v.s. Generalization performance\\u201d, where we explain how the critical distance derived from the convergence analysis can help us better understand the generalization. In addition to the general response, we also include some extra explanations below.\\n\\nOur claim for the critical distance for generalization is based on the extensive numerical results. For instance, in the case of node n=32 and dec-phase-1 in Table 2, generalization performance without consensus distance control is significantly lower than the fully centralized training upper bound (91.78 v.s. 92.82). However, one can recover the upper-bound performance (0 consensus distance) by reducing the distance to $1 / 4$ of the maximum distance in the case of without control, which satisfies our description of \\u2018critical distance\\u2019. *It demonstrates how the critical distance impacts the optimization quality of the critical initial training phase and thus impacts the final generalization*. More elaborate results can be found in Appendix Table 8. \\n\\n### The definition of the \\u2018training phases\\u2019 and the choices of learning rate (Question 4)\\nWe follow the SOTA learning rate schemes for CV tasks in distributed deep learning [1, 2], thus we warm up the learning rate (over a very small fraction of training epochs: 5/300) and use stage-wise learning rates: our considered training phases are separated by the learning rate decay (i.e., epoch 0-150 and epoch 150-225, and epoch 225-300).\\n\\nUnder this definition of the training phases, there is a dramatic difference between phases and high consistency within the phase. More precisely, within each phase, key properties related to optimization, such as gradient norm and smoothness (see Figure 5 in appendix), exhibit high consistency; besides, as shown in Figure 1, the consensus distance for normal decentralized training stays on the same level.\", \"the_choice_of_the_initial_learning_rate_does_not_impact_the_division_of_the_training_phases\": \"the value difference in the reasonable candidates of the learning rate is far less than the difference introduced by the learning rate decay (with the factor of 10).\\nWe do have similar observations even we use a different learning rate (c.f. Table 11 in appendix).\\n\\n### Answer to the question \\u201chow was the critical distance calculated, how were L and sigma estimated?\\u201d (Question 3)\\nWe only empirically examined the existence of the *critical* consensus distance, and we did not compute the *critical* consensus distance in a closed-form. \\nMore precisely, we empirically measured and controlled the *consensus distance* as defined in Section 3.2 ($\\\\Xi_{t}^{2} := \\\\frac{1}{n} \\\\sum_{i} || x_{i}^{(t)} - \\\\bar{x}^{(t)} ||^2$), by controlling which we examined the existence of the *critical* consensus distance (as shown e.g. in Figure 2, Table 2 & 3 & 4 & 5). 
\\n\\n### The significance of the drop in generalization (Question 5)\\nDecentralized learning still encounters quality-drop issues on other communication topologies. We believe the mentioned 0.5% quality drop (e.g. from 92.82 to 92.27) on a relatively small-scale graph (n=32) is significant for deep learning training. The gap widens dramatically when considering larger-scale decentralized learning (e.g. the case of n=64 considered in the main text).\\n\\n### Results on SGD without momentum (Question 2)\\nPlease refer to our [general response](https://openreview.net/forum?id=bIrL42I_NF8&noteId=R4dEk-WDEZM).\\n\\n### References\\n1. Accurate, Large Minibatch SGD: Training ImageNet in 1 Hour. Goyal et al., 2017.\\n2. Bag of Tricks for Image Classification with Convolutional Neural Networks. He et al., CVPR 2019.\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"### Convergence analysis v.s. Generalization performance\\nOur paper first identifies the critical parameter to address the optimization difficulty in decentralized optimization (when comparing the convergence rate with that of centralized SGD). We theoretically derive the critical consensus distance based on the convergence analysis and empirically justify its impact/usefulness on training performance (as shown in Figure 2a & 2b) for decentralized deep learning. We further thoroughly examine the effectiveness of the proposed metric on the generalization performance (the main metric in deep learning).\\n\\n**From convergence analysis to better understand generalization**. A line of recent research reveals the interference between initial training (optimization) [2, 3, 4] and the later reached local minima (generalization) [1, 5, 6, 7, 8]: the generalization of the deep nets cannot be studied alone via vacuous generalization bounds, and its practical performance is contingent on the critical initial learning (optimization) phase, which to some extent can be characterized by the conventional convergence analysis [2, 3, 4, 5, 6, 7]. \\nThis motivates us to derive the metric (i.e. critical consensus distance) from the convergence analysis, for the examination of the consensus distance (on different phases) in decentralized deep learning training. For example, (1) we identify the impact of different consensus distances at the critical learning phase on the quality of initial optimization, and the final generalization [2, 3, 4, 5] (i.e. our studied case of dec-phase-1), and (2) we reveal similar observations as in [5, 6, 7] when the optimization is no longer a problem (our studied case of dec-phase-2), where the existence of consensus distance can act as a form of noise injection [5] or sampling models from the posterior distribution [6, 7] (the detailed discussion can be found in our response to R2).\\n\\nWe will clarify these points in the next revision.\\n\\n### Results on SGD w/ and w/o momentum\\nCurrent analysis on SGD with momentum is rather loose and does not characterize the acceleration benefit observed in deep learning practice. Thus, building the consensus distance analysis on top of loose momentum analysis may not be as meaningful.\\nOur work aims to better understand the limitations of SOTA training methods in decentralized deep learning and thus our empirical understandings are on top of these SOTA training schemes. We additionally include experiments on SGD without momentum, to demonstrate that our claim on the relation between consensus distance and generalization performance stands regardless of the use of momentum.\\n\\nNumerical Results on Vanilla SGD (n = 32, 64). For n=32, 64, we use the scaling-up factor of 32, 64 respectively, and run experiments over 3, 2 seeds respectively. We can observe the consistent pattern as in the case of SGD with Nesterov momentum presented in our main paper. This validates the coherence between our theory and experiments. 
Note that in the case of $n=32$, the performance gap between \\u2018ring\\u2019 and \\u2018complete\\u2019 is not significant; however, the pattern manifests itself in the case of $n=64$, where the performance gap is considerable.\\n\\n| \\t| \\t| n=32 \\t| n=64 \\t|\\n|-------------\\t|-----------------\\t|------------------\\t|-------------------\\t|\\n| ring \\t| \\t| $90.30 \\pm 0.14$ \\t| $88.92 \\pm 0.23$ \\t|\\n| complete \\t| \\t| $90.64 \\pm 0.19$ \\t| $90.58 \\pm 0.26$ \\t|\\n| dec-phase-1 \\t| $\\quad \\Xi_{max} \\quad$ \\t| $90.51 \\pm 0.05$ \\t| $88.80 \\pm 0.03$ \\t|\\n| \\t| $\\quad 1/2 \\Xi_{max} \\quad$ \\t| $90.74 \\pm 0.14$ \\t| $89.89 \\pm 0.03$ \\t|\\n| \\t| $\\quad 1/4 \\Xi_{max} \\quad$ \\t| $90.88 \\pm 0.37$ \\t| $90.43 \\pm 0.05$ \\t|\\n| dec-phase-2 \\t| $\\quad \\Xi_{max} \\quad$ \\t| $90.64 \\pm 0.18$ \\t| $90.63 \\pm 0.37$ \\t|\\n| \\t| $\\quad 1/2 \\Xi_{max} \\quad$ \\t| $90.55 \\pm 0.19$ \\t| $90.46 \\pm 0.15$ \\t|\\n| \\t| $\\quad 1/4 \\Xi_{max} \\quad$ \\t| $90.57 \\pm 0.17$ \\t| $90.63 \\pm 0.25$ \\t|\\n\\n### References\\n1. Implicit Regularization in Deep Learning. Behnam Neyshabur, Ph.D. Thesis, 2017.\\n2. The Break-Even Point on the Optimization Trajectories of Deep Neural Networks. Jastrzebski et al., ICLR 2020.\\n3. Time Matters in Regularizing Deep Networks: Weight Decay and Data Augmentation Affect Early Learning Dynamics, Matter Little Near Convergence. Golatkar et al., NeurIPS 2019.\\n4. Critical Learning Periods in Deep Networks. Achille et al., ICLR 2019.\\n5. Don't Use Large Mini-Batches, Use Local SGD. Lin et al., ICLR 2020.\\n6. Stochastic Weight Averaging in Parallel: Large-Batch Training that Generalizes Well. Gupta et al., ICLR 2020.\\n7. Averaging Weights Leads to Wider Optima and Better Generalization. Izmailov et al., UAI 2018.\\n8. On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima. Keskar et al., ICLR 2017.\"}",
"{\"title\": \"Comments on the effect of consensus in decentralized deep learning\", \"review\": \"This work investigated a very interesting topic about generalization in decentralized deep learning. The authors identify the consensus distance as the key factor that affects the generalization performance of decentralized training. In general, the paper is well written and there are several interesting observations and discoveries involved regarding the generalization performance of decentralized learning. But the quality and significance of the work seem not very high.\\n\\n1. There is no clear link between the theory part and the numerical results. (Th1 is based on previous work.) The other results, e.g., remark 2, proposition 3, and lemma 4 cannot claim how the consensus distance affects the generalization error. All the statements are based on the observations in terms of consensus distance shown in eq4. I can only agree that the distance is related to the generalization error.\\n\\n2. Also, Th1 quantified the convergence rate for SGD, while in the numerical results the authors used accelerated SGD and adam. \\n\\n3. How did the critical distance be calculated? For example, what are L and sigma approximated?\\n\\n4. From the numerical results, the authors at least claim two points of linking the critical consensus distance and the performance: i) the critical distance is important to the initial training phase; ii) a non-negligible consensus distance can improve the generalization performance. There is no convincing explanation that the critical distance contributes to the generalization. Also, there is no clear definition of either the initial training phase or middle phase, since the learning rate is chosen by the authors so that it might not reflect the true convergence phases. (especially in this case a warm-up scheme was used). In all, the discussion about the relation between the general error and critical distance is vague.\\n\\n5. Except for the ring case, the generalization error of the most decentralized learning results might be worse than the centralization learning within 0.5%, which seems not that significant. Comparing the linear speed up benefited from the decentralized training, is this loss significant? \\n\\nIn summary, I don\\u2019t think the theory part is very strong in this paper, and the relation between the critical distance and the generalization error needs to be further justified.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An initial step towards understanding decentralized training\", \"review\": \"Summary:\\n\\nThis paper studies the problem of decentralized training where several computing units are used simultaneously to process the data, and computing units are assumed to be connected over a network. The main focus is to better understand the role of consensus, or lack there of, into the generalization abilities of decentralized training. The authors describe an upper bound for dissimilarity of local variables that guarantees the performance of decentralized training is as good as centralized one. Moreover, some heuristic guidelines are proposed to control consensus during training process. Some numerical evidence is also provided.\", \"reasons_for_score\": \"I believe the paper is well written and the results are useful for the literature. There are a couple of issues that need to be addressed. Moreover some context item that need to be elaborated more carefully.\\n\\nSome items that need to be elaborated.\\n\\n1. The main argument of the paper sees to be that generalization might be affected by decentralized training, as initially pointed out by Table 1. However, at some point there is a conceptual leap and the discussion transforms into analyzing convergence rates. Although there is a connection between rates and generalization, one is not equivalent to the other. The provided analysis is done on rates, I do not think one can translate that into generalization so straightforward.\\n\\n2. The authors claim to analyze the problem theoretically. However, wha seems to be the main result is left as Remark 2. I believe Remark 2 is a statement that needs to proven, as represents the main issues addressed in the paper, namely, how consensus affects convergence rates.\\n\\n3. The authors mention that the analysis is made on non-momentum algorithm, but experiments are made with the momentum version. This is an issue, as the translation of the obtained results into the momentum method needs to be proven. How are the authors sure that momentum does not play a role into the dependency on consensus?\\n\\n4. One main concept seems to be that \\\\phi_t does not change too fast. This is left for the appendix. Such a main concept needs to be spelled out in the main text.\\n\\n5. Is there a cite for Lemma 4?, there seems to be studied in the literature before.\\n\\n6. I read Remark 2 as the main result rather than Proposition 3.\\n\\n7. I value the experimental results, they are rather informative and complete.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Great topic but need more thoughtful discussions\", \"review\": \"The authors consider the decentralized optimization problem and explain the generalization gap using the consensus distance. They show that when the consensus distance does not grow too large, the performance of centralized training can be reached and sometimes surpassed. The conducted experiments are extensive and the delivered message is pretty clear -- Critical consensus distance exists in the initial training phase and ensures good optimization and generalization, while a non-negligible consensus distance at middle phases can improve generalization over centralized training.\\n\\nOn the theory side, the main contribution is Remark 2 and proposition 3, but why remark 2 relates to generalization are unclear (it only shows the convergence rate), neither does proposition 3 (it only shows the consensus distance). How do we relate the convergence rate differences with the generalization capability is unclear. So I would say the abstract is a bit overclaiming, the authors better tune down their claims to practical only, without any theoretical guarantees -- \\\"We identify the changing consensus distance between devices as a key parameter to explain the gap between centralized and decentralized training. We show that when the consensus distance does not grow too large, the performance of centralized training can be reached and sometimes surpassed.\\\" \\n\\nOn the literature side, besides the gossip-based decentralized methods, there are also many primal-dual based decentralized optimization methods [1,2]. In those methods, there will be no mixing matrix and hard to run multiple mixing steps, the authors better also comment on those and discuss how the proposed findings can help these works. \\n\\nOverall speaking, I feel the motivation and message delivering is clear, though I am afraid that the main contribution falls into the practical findings (they are also important though) instead of the theoretical guarantees -- there is a mismatch between theory and implementations. \\n\\n[1] Mingyi Hong, Davood Hajinezhad, and Ming-Min Zhao. \\\"Prox-PDA: The proximal primal-dual algorithm for fast distributed nonconvex optimization and learning over networks.\\\" International Conference on Machine Learning. 2017.\\n[2] Haoran Sun and Mingyi Hong. \\\"Distributed non-convex first-order optimization and information processing: Lower complexity bounds and rate optimal algorithms.\\\" IEEE Transactions on Signal Processing 67.22 (2019): 5912-5928.\\n\\n------\\nupdate after rebuttal\\n\\nAfter reading the author's response, the authors stated that they indeed identify the optimization difficulty and consensus distance in theory, while only empirically justify its generalization on training performance. As also pointed out by reviewers 1 and 3, the gap between the convergence rate/consensus distance and the generalization capability still exists, causing the mismatch between the theory and the simulations. But at the same time, the work can also serve as an initial good start and raises good points for the literature. I will keep my score unchanged.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Useful contributions on decentralized methods for training deep network, if somewhat incremental\", \"review\": \"This paper studies decentralized gradient methods for training deep networks. It focuses on the so-called \\\"critical consensus distance\\\" and how disagreement during different stages of training ultimately effects optimization (training loss) and learning (generalization error). Theory is provided for the case of synchronous symmetric averaging methods, and the paper is complemented with detailed experiments on CIFAR and tiny-ImageNet.\\n\\nThis is a nice contribution to the growing literature on decentralized training for deep neural networks. The connection between consensus distance and performance has previously been studied to a limited extent in various settings, so the contribution of this work is somewhat incremental. However, this paper makes the connection somewhat more rigorous through the theoretical developments in Section 3, and it provides a more detailed empirical investigation than previous work. I expect the results to be useful to those working on decentralized training and am supportive of accepting it.\\n\\nI have a few suggestions and comments, about which I look forward to hearing from the authors.\\n1. You mention that consensus distance has previously been investigated to some extent (e.g., Fig 2 in Assran et al. 2019). Are there connections between consensus distance and other quantities that have been considered in the literature to relate training to performance (e.g., gradient diversity as in Yin et al. 2017, or the closely related gain ratio in Johnson et al., 2020). Similarly, is there a connection to stochastic weight averaging (Izmailov et al. 2018) and it's parallel version (Gupta et al., 2020)?\\n2. It is not clear if there are specific aspects of the tasks considered that are important for the findings to hold. CIFAR-10 and ImageNet-32 are both relatively small datasets. Is it possible that in the centralized setting, ResNet-20 is overfitting, and the error from decentralized SGD has a regularizing effect, leading to better generalization? It would be interesting to perform further experiments to explore if this is the case. \\n3. It would also be interesting to know if the results similarly carry over to the standard (higher-resolution) ImageNet training and models (e.g., ResNet-50), to know if the phenomena observed are relevant to large-scale training. While I appreciate that CIFAR and ImageNet-32 experiments are useful for quick experimentation, and running experiments on the standard ImageNet task are much more computationally expensive, CIFAR and ImageNet-32 are not very reflective of tasks where one would normally use distributed or decentralized training, since one can easily train a model on them using a single GPU in a reasonable amount of time (~1 hour).\\n4. Regarding the experiments in Table 5 (longer training), why focus on prolonging training in phase 1? I would expect that extending later phases would potentially allow to overcome issues due to large consensus distances in phase 1. Did you explore this?\\n5. The analysis focuses on symmetric (push-pull) mixing. Do you expect the same trends to carry over to push-only methods such as those considered in Assran et al., 2019?\\n6. Nowadays, the half-cosine learning rate schedule is also commonly used for CV tasks (He et al. 2018). How do you expect this to affect CCD and the analysis leading to Remark 2?\\n7. 
How does using more gossip iterations impact the practical utility of these methods? In particular, standard implementations of all_reduce only require that each node communicate 2 copies of the parameters per iteration. Now that we need to potentially perform multiple rounds of gossip between each optimizer update, are decentralized methods still attractive for reducing overall training time? On a related note, Tsianos and Rabbat (2014) also proposed to use multiple rounds of gossip to essentially reduce the CCD for convex problems, and show that it can lead to overall less communication overhead to reach a desired level of accuracy. Is it possible to show something similar in this setting?\\n\\nAdditional references mentioned:\\n- Gupta, Serrano, DeCoste, \\\"Stochastic weight averaging in parallel: Large-batch training that generalizes well,\\\" ICLR 2020 and arxiv:2001.02312\\n- He, Zhang, Zhang, Zhang, Xie, and Li, \\\"Bag of tricks for image classification with convolutional neural networks,\\\" CVPR 2019 and arxiv:1812.01187\\n- Izmailov, Podoprikhin, Garipov, Vetrov, and Wilson, \\\"Averaging weights leads to wider optima and better generalization,\\\" arxiv:1803.05407\\n- Johnson, Agrawal, Gu, and Guestrin, \\\"AdaScale SGD: A user-friendly algorithm for distributed training,\\\" ICML 2020 and arxiv:2007.05105\\n- Tsianos and Rabbat, \\\"Efficient distributed online prediction and stochastic optimization with approximate distributed averaging,\\\" IEEE Transactions on Signal and Information Processing over Networks 2016 and arxiv:1403.0603\\n- Yin, Pananjady, Lam, Papailiopoulos, Ramchandran, and Bartlett, \\\"Gradient diversity: A key ingredient for scalable distributed learning,\\\" AISTATS 2018 and arxiv:1706.05699\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
UOOmHiXetC | Structure and randomness in planning and reinforcement learning | [
"Piotr Kozakowski",
"Piotr Januszewski",
"Konrad Czechowski",
"Łukasz Kuciński",
"Piotr Miłoś"
] | Planning in large state spaces inevitably needs to balance depth and breadth of the search. This balance has a crucial impact on planners' performance, and most planners manage this interplay implicitly. We present a novel method $\textit{Shoot Tree Search (STS)}$, which makes it possible to control this trade-off more explicitly. Our algorithm can be understood as an interpolation between two celebrated search mechanisms: MCTS and random shooting. It also lets the user control the bias-variance trade-off, akin to $TD(n)$, but in the tree search context.
In experiments on challenging domains, we show that STS can get the best of both worlds, consistently achieving higher scores. | [
"reinforcement learning",
"uncertainty",
"model-based",
"MCTS"
] | Reject | https://openreview.net/pdf?id=UOOmHiXetC | https://openreview.net/forum?id=UOOmHiXetC | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"P8eZ5kF-Vx",
"DNY8mJG2gK",
"-qG5CNrZKW4",
"IwpryegQFNX",
"idoPaI-3Ue8",
"aJ0EHW1oeyt",
"vIYN3-5iSf",
"3ZS5k5I9pYJ",
"CknqFGIrT6W",
"90ja337j_CR",
"f1DDMK7BMq4",
"H9U-XB0g8W",
"WOEvWtzo1h8",
"2CSPQtGSJsB",
"vvUx6UGCABv",
"ZonZbyneQa5"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040407573,
1605902834307,
1605902772480,
1605902603813,
1605902452863,
1605902167489,
1605615756839,
1605615617220,
1605615399562,
1605615239560,
1605614854761,
1604676629331,
1604305153915,
1604043121208,
1604016744273,
1603640536179
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3720/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3720/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3720/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3720/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3720/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a modification to MCTS in which a sequence of nodes (obtained by following the policy prior) are added to the search tree per simulation, rather than just a single node. This encourages deeper searches that what is typically attained by vanilla MCTS. STS results in slightly improved performance in Sokoban and much larger improvements Google Research Football.\\n\\nR4 and R1 both liked the simplicity of the idea, with R1 also praising the paper for the thoroughness of its evaluation. I agree that the idea is interesting and worth exploring, and am impressed by the scope of the experiments in the paper as well as the additional ones linked to in the rebuttal. However, R1 and R5 explicitly noted they had many points of confusion, and across the reviews there seemed to be many questions regarding the difference between STS and other variants of MCTS. I also needed to read parts of the paper multiple times to fully understand the approach. If this many experts on planning and MCTS are confused, then I think readers who are less familiar with the area will definitely struggle to understand the main takeaways. While I do think the clarifications and new experiments provided in the rebuttal help, my overall sense is that the paper at this stage is not written clearly enough to be ready for publication at ICLR. I would encourage the authors to try to synthesize their results and organize them more succinctly in future versions of the paper.\", \"one_comment_about_a_point_of_confusion_that_i_had\": \"I noticed the PUCT exploration parameter was set to zero for Sokoban, and one for GRF (with an explanation given that many values were tried, though these values are unspecified). As the exploration parameter is normally considered to be the thing that controls whether MCTS acts more like BFS ($c = \\\\infty$) or DFS ($c = 0.0$), I would encourage the authors to more explicitly report which values they tried and to be clearer about the advantage of STS's multi-step expansions over low values of the exploration parameter.\"}",
"{\"title\": \"New experimental results\", \"comment\": \"We gently note that we update our answer with some new experimental results in our answers to the other reviewers.\"}",
"{\"title\": \"We update our answer with new experimental results\", \"comment\": \"We update our answer with new experimental results. Here we present the most relevant results to this review and we encourage the reviewer to take a look at the remaining answers, where we also discuss the results of several new experiments.\\nWe thank the reviewer for the suggestion for comparison with AlphaGo-style leaf evaluation using rollouts. We have run two additional experiments with rollout-based evaluation using different policies. In each experiment, the rollout was truncated after 10 steps (to complete them before the end of the rebuttal phase and ensure fair comparison with STS).\\n\\nThe return after the last step of the rollout was approximated using the value network. This value and rewards collected were used to calculate the leaf's value in the same way as in AlphaGo. In experiments, we tested two strategies for generating rollouts:\\n\\n1. Actions sampled from the prior policy - the same setup as in AlphaGo, except for rollout truncation and the policy's choice. AlphaGo used a policy pretrained on expert data. Since we do not have access to such data for Google Football, we instead used the prior policy trained over the course of the algorithm.\\n2. Actions chosen deterministically, to maximize Q(s, a) + exploration_weight * \\\\pi(a | s). Q was computed by a neural network and \\\\pi is the probability given by the trained prior distribution. We recall that this setup is equivalent to STS except for the crucial fact that STS adds the expanded leaves to the search tree.\\n\\nWe observed that strategy 1. performed very poorly, which highlights the importance of using neural networks for leaf evaluation. Strategy 2. performed significantly better but still worse than STS. In our opinion, these results strengthen the evidence that the advantage of STS stems from the algorithmic reasons by building a more efficient search tree. Note the tasks on which the two evaluated strategies performed the worst, i.e. counterattack_easy, counterattack_hard, single_goal_versus_lazy, are those with the longest lengths of a successful episode. This supports the argument that STS better handles problems with long planning horizons.\\n\\n(In experiments, we used three seeds and we reported their median. Numerical results are available [here](https://postimg.cc/94GjXdrH) 1 is marked as (Q + policy r.) and 2 as (Q + det. r.)).\"}",
"{\"title\": \"We update our answer with new experimental results\", \"comment\": \"We update our answer with new experimental results. Here we present the most relevant results to this review and we encourage the reviewer to take a look at the remaining answers, where we also discuss the results of several new experiments.\\n\\nWe present partial results with a bigger number of passes. A complete experiment will be presented in the final version of the paper. \\n\\nWe ran a Sokoban experiment with an expansion of 500 nodes per move. We found out that after 1 million steps STS and MCTS had similar results (MCTS 85.5% solved rate, STS 86/86.5% *). It is hard to draw definite conclusions; we speculate that MCTS gains some advantage in early training due to more methodical BFS-like search (possible within a high computational budget). These gains seem to disappear later on, suggesting that STS works well also in large scale settings. \\n\\n(We ran two versions of STS - (1) 100 passes, H=5 and (2) 50 passes, H=10. Results are averaged over 5 runs. Graphs can be found [here](https://postimg.cc/4KrTs8J5)).\"}",
"{\"title\": \"Update with new experimental results\", \"comment\": \"Here we present the most relevant results to this review and we encourage the reviewer to take a look at the remaining answers, where we also discuss the results of several new experiments.\\nWe conducted experiments with STS and MCTS with the sparse reward version of Sokoban. Namely, reward is obtained only for solving the board. We have not observed significant differences from the previous experiments; in particular, STS is visibly better than MCTS. We speculate that the setup presented originally in the paper is already quite sparse (additional reward is presented by placing the first of two boxes).\\n(Results are available [here](https://postimg.cc/DJM4v3SN); results are averaged over 5 runs for sparse settings and 10 runs for dense)\"}",
"{\"title\": \"We update with new experimental results\", \"comment\": \"We update with new experimental results. We also point out the reviewer's attention to the new answer to Rev2 in which we show results comparing our methods to AlphaGo.\\n\\nWe conducted experiments on Sokoban with a simpler backpropagation scheme similar to Soemers et al (2016). Namely, we backpropage only the value from the last node of the rollout, instead of the \\u2018mega-backprop\\u2019 of STS. The results are much worse - after 12 million steps STS with such simple backpropagation obtained on average 79% solve ratio (compared to 89% for STS presented in paper). (The graph is available [here](https://postimg.cc/3yztdV0k). Results are averaged over 10 training runs.)\"}",
"{\"title\": \"We thank the reviewer for the detailed review. We believe that it will lead to the improvement of our work; we are preparing the revision\", \"comment\": \"We thank the reviewer for the detailed review. We believe that it will lead to the improvement of our work; we are preparing the revision.\\n\\n**Weak point 3**: We agree that this part of the text could be written more clearly, and we will do so in the revised version. Answering your question, you are right: C is the number of passes in one planning step, while N_p is the total number of passes in the whole episode (until the solution is found). We also point to Table 4 (extending Table 1) to give more intuition about the memory used. One more clarification is perhaps also worth stating. In every planning step, an action is chosen, and the subtree corresponding to this action is retained to the next planning step. This is a rather standard: it improves the search quality but also increases memory consumption. In principle, the latter could be problematic, however, we did not observe this to be the case in our experiments. \\n\\n**Weak points 1 and 2**: we thank you for pointing out the references [1] and [2], we will include them in the related work section with appropriate discussion (see the text below). Our approach is different, in the sense that we operate in the modern 'post-AlphaZero' context. We are interested in algorithmic aspects, and our work is meant to make some steps towards understanding the tradeoff between breadth-first search and depth-first search as well as between bias and variance. As far as we understand, this is not present (explicitly) in these previous works. More precisely, let us point out the differences with [1]. Each STS step is composed of three elements: a) expansion of H consecutive nodes, b) addition of the expanded nodes to the tree, c) evaluation of H expanded nodes by a neural network value function approximator and backpropagation of each of these values (which for better efficiency is squashed in one 'mega' backpropagation step; hence the code for UPDATE in Algorithm 6). All of this is embedded in a reinforcement learning training loop. Although [1] expands multiple nodes, it backpropagates the 'game score value' of a final state of the simulation. We on the other hand learn this value (using RL) and benefit from the averaging effect of multi-step expansion. In the course of research leading to this publication, we performed experiments with multiple backpropagation schemes on Sokoban, which underperformed, being even worse than the standard MCTS. This ablation will be included in the revision.\\n\\nWe also note that, if properly used, our method does not increase memory usage much. We observed this throughout many experiments and checked rigorously in Sokoban's isolated setting, see Table 1 and Table 4. We argue that for moderate values of the multi-step expansion, the size of the tree does not increase significantly, and sometimes even decreases. In our view, this supports the hypothesis that STS builds 'a better search tree', when H is appropriately set. From our experiments, we recommend H=10 as the starting point.\\n\\nConcerning [2], the work presents interesting ideas of adding rollout (or their parts) to the search tree and various backup operators (including uncertainty awareness). 
This is similar to STS, though the crucial difference is that we use neural network estimates of values and never do full rollouts.\\n\\nWe also thank you for the \\u2018minor points\\u2019, which we agree with. We will modify the text accordingly. In particular, Lemma A.6.1 is meant to be merely an illustration. Having understood that the assumption is indeed strong, we decided to put it only into the appendix (we are considering removing it in the revision).\\n\\n[1] Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing \\n[2] Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review.\\n\\nThere is an important difference between STS and the variant of MCTS used in AlphaGo - STS adds states visited during the rollout from the leaf to the tree, while AlphaGo does not (we currently run and going to add an ablation comparing STS to AlphaGo-style simulation to the paper). This has several implications: a) due to the averaging, a better value is backpropagated up the tree (technically, this is more akin to TD-lambda as we evaluated the value function estimate in each step of multi-step expansion), b) arguably a more efficient search tree is built. Our intuition is that the tree is deeper, and the paths explored in multi-step expansion can easily be branched out during later planning passes. \\n\\nOn the experimental side, we provide an isolated study in Table 1, where some effects on the tree statistics can be seen. Last but not least, Table 2 shows substantial improvements in GRF (e.g. 40%-100% absolute improvement in solved rate on the more challenging tasks). \\n\\nWe run several thousands of tuning experiments (including a search over the PUCT parameter), which leads us to believe that the mentioned performance improvement is due to algorithmic properties of the proposed STS mechanism. We note that we concentrate on rather modest computational budgets, as in our view, it is an important regime for applications. We speculate that in this regime, algorithmic improvements might be more relevant as opposed to more brute-forced cases.\"}",
"{\"title\": \"We thank the reviewer for comments and questions. We will prepare a revised version of the paper, which in particular will expand on the related work\", \"comment\": \"We thank the reviewer for comments and questions. We will prepare a revised version of the paper, which in particular will expand on the related work.\\n\\nWe agree that the difference between MCTS and STS might be negligible when the number of simulations is large. That being said, we see much value in developing methods that perform well with smaller computational budgets. There is a clear practical aspect: using large numbers of passes (like aforementioned 1600) makes the method out of reach for many real-world uses, where the model of the environment is costly to run. Operationally, studying the methods relying on high computation budgets is the luxury that only several big and well-funded research labs can afford. From a more theoretical (or philosophical) point of view, we argue that putting constraints on the computational budget might be an important aspect of measuring 'intelligence'. Although we are fine with the fact that many recent advancements in AI heavily hinged on computational power, we sympathize with the view that the learning system's quality should also be measured along the resources axis. As neatly phrased by Lake et al. [1]: \\\"One worthy goal would be to build an AI system that beats a world-class player with the amount and kind of training human champions receive \\u2013 rather than overpowering them with Google-scale computational resources.\\\"\", \"answering_questions\": \"1. This is an interesting question. We believe that the series of papers [3], [4], [5] provided quite substantial evidence that the MCTS planner with value function evaluation (AlphaZero) replacing the policy rollouts (AlphaGo) is more powerful and simpler. Using rollout policies has several disadvantages. Some can be exemplified by the environments used in our experiments. In Sokoban, for instance, there are multiple \\u2018dead-end\\u2019 states (i.e., states from which the agent cannot reach the goal position) and our experiments showed that planning is essential for avoiding these pitfalls. A pure neural network performs much weaker (usually a drop is around >20% of solved ratio), and even worse for a random policy rollout. In GRF, using rollout is perhaps a better option, although it comes at the cost of computation needed to run this complex simulator. \\nWe also note that \\u2018shooting experiments\\u2019 can be seen as a proxy to answer this question. A slightly speculative conclusion would be that for some environments using rollouts yields much worse results (due to bias and variance) - Sokoban in our case. There are also environments where the estimates are sharp enough to get progress (GRF in our case and environments in [2]). Understanding the circumstances when such property holds is an interesting research question. \\n2. As mentioned above, we concentrate on a modest computation regime due to conscious philosophical and practical choices. However, the question asked is a valid one; we currently run an experiment with a bigger number of passes and will update the answer once it is done. \\n3. It is a good question. We speculate that this is an exploration issue as due to the particular construction of rewards in GRF, the reward in the corner scenario is more sparse (the so-called checkpoint rewards are not available). 
Moreover, random rollouts can still provide a reliable evaluation, which probably contributes to the \\u2018shooting\\u2019 victory. Note, however, that the STS result is also quite high, and well above the results of the remaining methods. Having said that, more investigation is needed to provide a definite answer. \\n\\n\\n[1] Building Machines That Learn and Think Like People, Lake et al. \\n[2] https://cs.brown.edu/people/gdk/pubs/analysis_mcts.pdf \\n[3] Mastering the game of Go with deep neural networks and tree search - Silver, D. et al. 2016. \\n[4] Mastering the game of Go without human knowledge - Silver, D. et al. 2017. \\n[5] A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Silver, D. et al. 2018.\"}",
"{\"title\": \"Thank you for your review. We will prepare a revised version of the paper that takes the reviewer's comments into account, in particular putting more emphasis on intuition in the paper and arranging the text so it can be found in one place, as well as improving presentation of some of the results\", \"comment\": [\"Thank you for your review. We will prepare a revised version of the paper that takes the reviewer's comments into account, in particular putting more emphasis on intuition in the paper and arranging the text so it can be found in one place, as well as improving presentation of some of the results (e.g. figure 7 and 8).\", \"Intuitively, STS enables building the search tree, which might be more efficient. We found it particularly relevant in Google Football, where individual actions induce rather small changes in the environment. Experimentally, we found that MCTS builds a relatively wide tree, which poorly explores the state space. This observation is also confirmed in isolated experiments presented in Table 1 (Table 4). STS has a smoothing effect, reducing biases in neural-net value estimators. Having said that, we would like to draw the Reviewer\\u2019s attention to the following parts of the paper:\", \"The introduction describes that STS can be viewed as a mechanism controlling depth and breadth of the search and can be viewed as a bias-variance control method, hence giving STS characteristics of interpolation between MCTS and random shooting.\", \"Section 4.1 mentions STS ability to exit from the erroneous region more quickly by correcting biased value functions estimates.\", \"In Section 4.2, we provide evidence that STS gives a boost in environments requiring long-horizon planning.\", \"We devote Appendix A.7.3. to dig deeper into intuition concerning the role of STS in reducing bias.\", \"We believe the case of sparse rewards to be rather orthogonal to the relative performance of MCTS and STS. One could even argue that STS might perform better by introducing more directed exploration in the form of longer \\u201cshots\\u201d that have a higher chance of reaching the goal state than the \\u201cwide\\u201d exploration induced by MCTS. At the moment, it is just a highly hypothetical claim that requires an experimental verification (we run a simple experiment in a sparse version of Sokoban). To deal with sparsity, additional methods are required, regardless of the fact whether MCTS or STS is used. We expect that STS would blend smoothly with most of such methods. We leave this research question for a new project.\"]}",
"{\"title\": \"Thank you for the detailed review - it will improve the quality of our work. We will release a revision of the text and new experimental results. In what follows we provide a detailed answer.\", \"comment\": \"We perceive the main benefit of our method in the fact that it builds a search tree of a different shape. Put differently, STS expands the tree with `macro-actions` consisting of H consecutive steps. We found it instrumental in the Google Football experiments, in which a typical action has a small effect on the environment state, and thus also the value of that state. As a result, standard MCTS struggles by falling into the \\u201cbreadth-first\\u201d type of search; see Table 5 for full results. This causes STS to perform significantly better within the same computational budget. We understand the concerns regarding the increased memory usage. This could indeed happen if the length of the multi-step expansion was set to high. However, for moderate values (we typically use H=10), it does not seem to be the case. This is confirmed in synthetic experiments on Sokoban; see column $N_t$ in Table 1. Interestingly, in some cases, the STS search tree is slightly smaller than the MCTS one.\\n\\nIt might be worth highlighting that in all experiments (including the one referred to above), we ensure a fair comparison of MCTS and STS by providing the same computational budget. More precisely, we set the number of passes $C_{MCTS}$, $C_{STS}$ and the multi-step expansion depth H so that $C_{MCTS} = C_{STS} \\\\cdot H$ (see Algorithm 1 for notation). \\n\\nWe thank the reviewer for bringing [Soemers et al., 2016] to our attention; we shall include this work in the revision. Let us, however, point out the differences. Each STS step is composed of three elements: a) expansion of H consecutive nodes, b) addition of the expanded nodes to the tree, c) evaluation of H expanded nodes by a neural network value function approximator and backpropagation of each of these values (which for better efficiency is squashed in one \\u2018mega\\u2019 back-propagation step; hence the code for UPDATE in Algorithm 6). All of this is embedded in a reinforcement learning training loop. Although [Soemers et al., 2016] expands multiple nodes, it backpropagates the 'game score value' of a final state of the simulation. We on the other hand learn this value (using RL) and benefit from the averaging effect of multi-step expansion. In the course of research leading to this publication, we performed experiments with multiple backpropagation schemes on Sokoban, which underperformed and scored below the standard MCTS. \\n\\nThank you for the comment concerning the bias. Besides the \\\"macro-actions interpretation\\u2019 mentioned above, our method is a way to deal with the bias-variance problem. The series of papers [Silver, D. et al. 2016, 2017, 2018] provided quite substantial evidence that the MCTS planner with value function evaluation (AlphaZero) replacing the policy rollouts (AlphaGo) is more powerful and simpler. Among multiple reasons, it reduces variance Monte-Carlo estimators provided by the rollouts. Making long rollouts is also more wasteful, exemplified by our random shooting experiments (which achieve decent results but also at a higher computational cost). As such, we are of the opinion that STS is a valuable tool in the current state-of-the-art planning landscape: the idea is simple, easy to code, and can be implemented on top of many algorithms from the MCTS family. 
Nevertheless, the proposed ablation is interesting. We will run it and update our answer later on.\\n\\nConcerning the question about the pretrained value function for the Sokoban experiments presented in Table 1 (and Table 4), we clarify that it came from a separate MCTS-based training. More details will be included in the revision. \\nRegarding the difference in hyperparameters between STS and Shooting presented in Appendix A.2, we confirm that this is indeed the case. We tuned each of the methods separately to ensure a fair and meaningful comparison (we ran several thousand experiments for each method to obtain the final values of the hyperparameters).\\n\\nOur zero-initialization scheme for the value network is meant to ensure uniform exploration in all directions at the initial training stages. Recent large-scale experiments [Andrychowicz M. et al., 2020] suggest that initialization has an important effect on RL training and recommend routinely using schemes similar to \\u2018zero-initialization\\u2019, at least in the model-free setting. This is consistent with our observations. We have also experimented with optimistic (as well as pessimistic) initialization, but we have found zero initialization to perform better.\\n\\nIn the PPO shooting experiments, we used the PPO-trained policy but not the value function. Therefore, the planning was done using the truncated empirical return gathered on sampled trajectories of length 10 as the signal. The worse results stem from this shortsightedness. We made this experiment mostly for benchmarking reasons; as such, it will be moved to the appendix in the revision.\"}",
"{\"title\": \"rather weak paper\", \"review\": [\"summary:\", \"This paper introduces Shoot Tree Search (STS), a planning algorithm that performs a multi-step expansion in Monte-Carlo Tree Search. Standard MCTS algorithms expand the search tree by adding one node to the tree for each simulation. In contrast, the proposed STS adds multiple nodes to the search tree at each simulation, where each node corresponds to the state and action that are encountered during rollout. By multi-step expansion, the evaluation of the trajectory is less-biased, which can be analogous to n-step TD. In the experiments on Sokoban and Google research football domains, STS outperforms baselines that include Random shooting, Banding shooting, and MCTS.\", \"Overall, my main concerns are technical novelty and presentation quality.\", \"The most common MCTS methods assume that the leaf node is expanded one at a time in each simulation (and its evaluation is performed either by rollout policy or by function approximator), but this common practice does not necessarily mean that MCTS should always do that. The main reason for only expanding one node per simulation in standard MCTS is memory efficiency: if we fully expand the rollout trajectory and retain its information to the search tree, we may get slightly more accurate value estimates. However, the nodes located deep in the tree will not be visited more than once in most cases, thus its effect is usually not significant, leading to the common practice of one-step expansion. More importantly, multi-step expansion has already been used in existing works (e.g. in [1], the tree is expanded by adding the whole rollout trajectory), thus I am not convinced that this work introduces a technical novelty.\", \"It seems that the relative benefit of the STS over MCTS observed in the experiments comes from the bias of the value function approximator. However, to show the effectiveness of 'multi-step' expansion compared to 'single-step' expansion, I think that more thorough ablation experiments should have been conducted. For example, we can consider the setting where both STS and MCTS perform leaf-node evaluation (i.e. UPDATE in Algorithm 5) by executing rollout policy rather than by using value function approximator. By doing so, we can focus only on the benefits of STS's retaining information of full rollout trajectory (i.e. multi-step expansion), compared to MCTS's retaining one-step information (i.e. single-step expansion) while eliminating the effect of biased value function estimation.\", \"To relieve too much bias in the current MCTS's leaf node evaluation, mixing MC return of rollout policy and the output of the value network could also have been considered, as in AlphaGo (Silver et al. 2016). It would be great to see if STS still has advantages over MCTS in various leaf node evaluation situations.\", \"Also, more writing effort may be required, and the current version of the manuscript seems premature to be published. There are some unclear or questionable parts.\", \"Algorithm 3 and Algorithm 4 are not the contributions of this work, thus they can be removed or moved to the Appendix. Instead, more discussions regarding the proposed method should have been placed in the main text.\", \"In Algorithm 2: the definition of CALCULATE_TARGET is missing.\", \"In Algorithm 5: In SELECT, the tree policy is defined by CHOOSE_ACTION that selects purely greedy action. If this describes the MCTS used in the experiments, I would say this is wrong. 
To make MCTS work properly, an in-tree policy that balances exploration vs. exploitation is required (e.g. a classical choice is the UCB rule).\\n- In Algorithm 6: In UPDATE, $N(s,a)$ and $quality$ are increased $c$ times more, which means that the longer the rollout, the more weight is given. What is the reason for assigning more weight to a trajectory that has a longer rollout length? If the entire planning horizon is limited to a finite length, this means that early simulations (short $path$ length, long $rollout$ length) have more weight than later simulations (long $path$ length, short $rollout$ length), but I do not think this is desirable. Is my understanding correct?\\n- For the Sokoban experiments, the pre-trained value function would significantly affect the performance of MCTS and STS, but I could not find how the value function was pre-trained.\\n- In Appendix A.2, the hyperparameters for Shooting and STS are very different. Why did you set Shooting's hyperparameters differently from STS's (e.g. VF zero-initialization, action sampling temp, etc.)?\\n- It seems that the choice of zero-initialization of the value network is rather arbitrary. I am not convinced that this would always work better. In some situations, optimistic initialization of the value network may be helpful to encourage exploration of uncertain state regions.\\n- In Table 2, why does RandomShooting-PPO underperform PPO? Since RandomShooting-PPO puts additional search effort on top of PPO, I expected that RandomShooting-PPO would outperform PPO.\\n- Table 5 could have been moved to the main text, replacing Table 2.\\n\\n[1] Soemers et al., Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing, 2016 IEEE Conference on Computational Intelligence and Games (CIG 2016)\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New MCTS algorithm for large state spaces\", \"review\": \"Summary:\\nThis paper proposes a new algorithm named \\u2018Shoot Tree Search (STS)\\u2019 to perform planning in large state spaces. The authors construct STS by redesigning the expansion phase of MCTS using multi-step expansion. The authors provide pseudocode of the STS and compare the performance of STS and MCTS empirically in various domains, such as Sokoban, Google Research Football (GRF).\", \"comments\": \"Firstly, there is no intuitive explanation of why, what and how. Even after reading the paper, I do not agree that STS is good, because there is no intuition as to why it is better than na\\u00efve MCTS. More detail, I have a question - The main difference between STS and MCTS seems to be using multi-step expansion or 1-step expansion. Although multi-step expansion will gather more information about (s,a) pairs with high Q(s,a) value (because the actions chosen by argmax Q and STS expands such trajectories), but in sparse reward problem, STS and MCTS will work similarly. Moreover, before getting positive reward, STS may worse than MCTS because it requires more samples to explore (because STS uses more samples for (s,a) pairs with high Q-values, which is not meaningful yet). So I think that this paper needs at least discussion on an intuitive level about the advantage of STS.\\nIn addition, the empirical details in appendix (figure 7 and 8 on page 18 and 19, respectively) look weird \\u2013 each algorithm seems to have stopped randomly or incompletely.\\nAlso, the authors seem to need to make an effort to make the paper more self-contained.\\nMinor comments\\nSome abbreviations are used without its full word or phrase. For examples, MCTS (it has been used in page 1, but the full phrase appears on page 3), and RL.\\nThere are no reference for random shooting and bandit shooting. The authors should provide more explanation about them with references.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper presents a simple extension to MCTS search by choosing multiple actions in each call to 'expansion' phase. The main concern with the paper is the number of simulations for MCTS.\", \"review\": \"**Summary**\\nThis paper presents a new planning algorithm, called Shoot Tree Search, to control the trade-off between depth and breath of the search. STS modifies the expansion phase of tree search by choosing multiple actions (e.g. $\\\\gt$ 1) instead of one level expansion. The presented idea is simple and straightforward and seems to provide improvement over existing tree-based planning algorithms. The presented detailed ablation studies provides insights about the choices made in the paper. \\n\\n**Reasons for score**\\nOverall, I liked the paper and the simplicity of the idea. However, my major concern is the comparison with MCTS. I am not convinced that STS would outperform vanilla MCTS when the number of simulations is in order of thousands (e.g. the number of simulations in AlphaGo paper is around 1600). \\n\\n**Strengths**\\n+ The idea is simple and seems to outperform vanilla MCTS implementation in the environments with large action space.\\n\\n**Weaknesses**\\n+ The comparison with the related work is not thorough which makes it hard to come into a decisive conclusion about the performance of the proposed method.\\n+ There are some missing related work, e.g. using policy network for multiple rounds of simulations.\\n\\n**Questions**\\n+ What would the benefits if we have a policy network to perform the rollouts (e.g. a similar method to [1])?\\n+ In general, the benefit of MCTS algorithm (like AlphaGo which performs around 1600 simulations) presents itself when the number of simulations are large. Can you compare running MCTS with more number of simulations (e.g. large C) and STS?\\n+ Can you please provide some insights on why in 'Corner' STS underperform compared to random shooting?\\n\\n[1] https://cs.brown.edu/people/gdk/pubs/analysis_mcts.pdf\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A modification of Monte Carlo tree search that produces marginal improvements that may not be present with tuning of the Monte Carlo tree search exploration parameter\", \"review\": \"The authors present a method that combines Monte Carlo tree search (MCTS) and random rollouts. The authors their relate this to the bias-variance tradeoff observed in n-step temporal difference methods. The authors evaluate their method on Sokoban and the Google Football League environment. The results show that the authors' method leads to marginal improvements on these domains.\\n\\nI do not think what the authors are doing is very novel as MCTS combined with rollouts was already used in AlphaGo. Furthermore, I believe the small difference in results can be made up by using only MCTS with a different exploration parameter (i.e. like the one that was used in the AlphaGo paper).\\n\\nI would like to know what benefits this method brings that cannot be obtained from combining MCTS with rollouts as in AlphaGo or from a hyperaparameter search with MCTS. Is there an anaylsis of the bias variance tradeoff of this method?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\n---\\n\\nThe paper presents \\\"Shoot Tree Search\\\", an approach that can basically be summarised as a variant of MCTS that expands (adds to the search tree) a longer sequence of up to H nodes to the tree per iteration, as opposed to the standard approach of expanding a single node per iteration. The experiments demonstrate improved performance in comparison to a \\\"standard\\\" MCTS and a variety of simpler rollout-based planning approaches, in challenging planning domains such as Sokoban and Google Research Football.\\n\\nStrong Points\\n---\\n\\n1) Well-written, mostly easy to read and understand.\\n2) Simple but interesting idea.\\n3) Thorough empirical evaluation, interesting results.\\n\\nWeak Points\\n---\\n\\n1. The paper describes the modification of MCTS into STS, which consists of making it expand a longer sequence of up to H nodes within a single iteration, as an entirely novel way to extend MCTS, but I'm not sure that that's entirely the case. For instance, Coulom's 2006/2007 paper \\\"Efficient Selectivity and Backup Operators in Monte-Carlo Tree Search\\\" already states: \\\"In practice, not all the nodes are stored. Storing the whole tree would waste too much time and memory. Only nodes close to the root are memorized.\\\", which suggests that something like this may have already been considered, but in that case was not found to be worthwhile. The 2016 paper \\\"Enhancements for Real-Time Monte-Carlo Tree Search in General Video Game Playing\\\" describes \\\"In this paper, the tree is simply expanded by adding the whole play-out to the tree.\\\", which seems similar.\\nI do still like that the paper performs a thorough evaluation of this idea, which I am not aware of appearing in previous literature, and the setting with DNNs for value / policy function approximations is also different from aforementioned papers which may lead to different trade-offs. The use of a DNNs for value function probably changes the story quite a bit here, because the longer horizon H also changes the point at which the value function is computed, as opposed to those older papers with values estimated by random rollouts (which remains the same regardless of the horizon H). So I'm not saying the idea isn't \\\"novel enough\\\", just that some discussion of past work seems to be missing.\\n\\n2. In my experience, the primary reasons historically for the typical strategy of expanding just 1 node per iteration in standard MCTS (without DNNs) are 1) to reduce memory usage (especially when copies of game states are stored inside nodes, because then every node can be quite big), and 2) efficiency, because if you store copies of game states in nodes, and create more nodes, you also need to copy more game states (whereas a random playout without node and state storing can just roll out at once without making intermediate copies of states). I'm kind of missing a discussion of these kinds of considerations. \\n\\n3. I'm not sure that I can fully understand the experiment setup, in particular looking at Table 1. C is a hyperparameter denoting the number of planning passes, and N_p is described as \\\"the average number of passes until the solution is found\\\". How can N_p ever exceed C? Shouldn't it be upper bounded by C? I guess C might be the number of planning passes \\\"per time step\\\", and N_p is total over the entire episode, something like that? But this is not really clear to me. 
If the algorithms are really restricted to just C iterations of MCTS, I guess it's fair to always keep C*H constant and then my points above about memory usage / efficiency are not a big deal since they would still be equal across all scenarios... but I'm a bit confused here due to N_p exceeding C.\\n\\nOverall Recommendation\\n---\\n\\nRight now I have too many little points of confusion / missing discussion, as pointed out under \\\"weak points\\\" above, to recommend acceptance. That said, there is also enough to like about the paper, and I can easily envision that most of the points of confusion could be relatively straightforward to clear up in a revision.\\n\\nQuestions for authors\\n---\\n\\nCould you please clarify the points raised under \\\"weak points\\\" above?\\n\\nMinor Comments\\n---\\n\\n- On the first page, the comma in \\\"Google Research Football is, an advanced\\\" seems unnecessary and confusing.\\n- On page 6, the wording \\\"Shooting methods perform poorly for Sokoban\\\" could be confusing because the newly proposed \\\"Shoot Tree Search\\\" method can very easily be interpreted as also being a \\\"shooting method\\\" due to its name.\\n- In Lemma A.6.1, the assumption that STS and MCTS build the same tree T seems to me like it's a VERY strong assumption; the MCTS has to make very, very specific choices, with very frequent overlap making identical choices across different iterations (inherently somewhat unlikely due to the visit count terms in PUCT and other Selection strategies), for this to be true.\\n\\nAfter Discussion\\n---\\n\\nI increased my review from marginally below to marginally above acceptance threshold. Most of the remarks I had were at least partially addressed. If the paper gets accepted, I'd still recommend looking at some of them again and clarifying more. A simple, explicit remark somewhere around Table 1 explaining that N_p can indeed exceed C due to relevant parts of the search tree being preserved across time steps would help a lot. Some more explicit discussion about why the difference between using trained value functions vs. heuristics / terminal results matters so much that it makes this substantially different from prior work would also help (I understand that it is because in prior work the only advantage of storing all those extra nodes was really just that it could retain slightly more information from backpropagations in those nodes, whereas in your case it changes which state is the state that gets evaluated by a trained value function, but this should be more explicit in the paper).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
zv-typ1gPxA | Retrieval-Augmented Generation for Code Summarization via Hybrid GNN | [
"Shangqing Liu",
"Yu Chen",
"Xiaofei Xie",
"Jing Kai Siow",
"Yang Liu"
] | Source code summarization aims to generate natural language summaries from structured code snippets for better understanding code functionalities. However, automatic code summarization is challenging due to the complexity of the source code and the language gap between the source code and natural language summaries. Most previous approaches either rely on retrieval-based (which can take advantage of similar examples seen from the retrieval database, but have low generalization performance) or generation-based methods (which have better generalization performance, but cannot take advantage of similar examples).
This paper proposes a novel retrieval-augmented mechanism to combine the benefits of both worlds.
Furthermore, to mitigate the limitation of Graph Neural Networks (GNNs) in capturing global graph structure information of source code, we propose a novel attention-based dynamic graph to complement the static graph representation of the source code, and design a hybrid message passing GNN for capturing both the local and global structural information. To evaluate the proposed approach, we release a new challenging benchmark, crawled from diversified large-scale open-source C projects (95k+ unique functions in total). Our method achieves state-of-the-art performance, improving over existing methods by 1.42, 2.44 and 1.29 in terms of BLEU-4, ROUGE-L and METEOR. | [
"Code Summarization",
"Graph Neural Network",
"Retrieval",
"Generation"
] | Accept (Spotlight) | https://openreview.net/pdf?id=zv-typ1gPxA | https://openreview.net/forum?id=zv-typ1gPxA | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"orjMaqTYDiu",
"qA3nsWevlk1",
"pnjV-2VO8zU",
"4oDFrhsJaK",
"VzIfDIrXNFS",
"TwSQfh4u1vj",
"ZJ6ZYqDGzm7",
"uGr4XBPR5l6",
"3wrO0jIvyCU",
"US1E9bhU1QJ",
"_5DmQwJWuo"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610716972938,
1610040362815,
1606240352652,
1606240247051,
1605975036681,
1605974595701,
1605973626938,
1605973495225,
1603944362726,
1603877638862,
1603850338329
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3719/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3719/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3719/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Recomputing the METEOR scores on the CCSD benchmark using the official Meteor 1.5 script (previously the NLTK version was used)\", \"comment\": \"Dear Program Chairs and reviewers,\\n\\nWe are writing to report that we realized \\n**there was a discrepancy between the METEOR scores** computed by the NLTK package (which we used to compute METEOR scores for all baselines and our methods on the CCSD benchmark) and the METEOR scores computed by the official Meteor 1.5 script. The reason is that the NLTK version follows the original METEOR paper while the official script follows the Meteor 1.3 paper which made some changes to improve the original version. \\n\\nEven though **the two versions of METEOR are both valid and effective**, to follow most previous natural language generation papers, we decided to recompute the METEOR scores for all methods on the CCSD benchmark using the official Meteor 1.5 script.\", \"we_have_done_the_following_things_to_address_this_issue\": \"We have recomputed the METEOR scores using the official Meteor 1.5 script for **all baselines and our methods on the CCSD benchmark**, and updated the corresponding numbers in the manuscript. Please be informed that **this change did NOT make any difference to the experimental conclusion** because in our experiments, we computed the METEOR scores for all baselines and our methods with the same script, even though the absolute values of their METEOR scores changed with the new evaluation script, the relative relations of these values did not change. And our model still achieved the state-of-the-art performance on the CCSD benchmark. For example, on the CCSD benchmark, our proposed model still outperformed the existing state-of-the-art method by **1.24** in terms of the updated METEOR scores. \\nWe have double-checked the results of all the other evaluation metrics carefully to ensure their correctness.\", \"references\": [\"NLTK version of METEOR: https://www.nltk.org/_modules/nltk/translate/meteor_score.html\", \"Official Meteor 1.5 script: https://www.cs.cmu.edu/~alavie/METEOR/README.html\", \"Original METEOR paper: http://www.cs.cmu.edu/~alavie/METEOR/pdf/Lavie-Agarwal-2007-METEOR.pdf\", \"Meteor 1.3 paper: http://www.cs.cmu.edu/~alavie/METEOR/pdf/meteor-wmt11.pdf\"]}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper proposes an interesting method for combining retrieval-based models and graph neural networks for source code summarization. Finding new ways of bringing in additional context for graph-based models is an important research direction in this space, and the paper presents a novel and effective approach. The initial submission was missing experiments on existing benchmarks, but new experiments presented in the discussion phase are enough to resolve that concern. Reviewers are unanimously in support of acceptance.\"}",
"{\"title\": \"Common Response-Summary of Updated Version\", \"comment\": \"Thank all reviewers for your suggestions and comments. We have submitted an updated version of the paper and included additional experiments.\\n\\n1 ) We have added an evaluation on existing benchmarks according to Reviewer 1\\u2019s, Reviewer 3\\u2019s and Reviewer 4\\u2019s comments.\\n\\n2 ) We have added an ablation study on Retrieval-based Augmentation according to Reviewer 1\\u2019s and Reviewer 4\\u2019s comments.\\n\\n3 ) We have provided the mathematical formulation of \\u201cStep 3: Retrieved Summary-based Augmentation\\u201d according to Reviewer 1\\u2019s comments.\"}",
"{\"title\": \"Common Response-Evaluation on existing benchmarks and Ablation study on Retrieval-based Augmentation\", \"comment\": \"We thank all reviewers for the insightful and valuable comments!\\n\\nQ1 (R1, R3, R4): Evaluation on existing benchmarks.\", \"a1\": \"We conducted additional experiments on a public dataset, i.e., the Python Code Summarization Dataset (PCSD), which was also used in Rencos (the most competitive baseline in our paper). The total number of code samples in PCSD is 109,726[1]. This number is comparable to the size (i.e., ~95k) of our own CCSD benchmark.\", \"setting\": \"We follow the setting of Rencos and split PCSD into the training set, validation set and testing set with fractions of 60%, 20% and 20%. We construct the static graph and compare our methods on PCSD against various competitive baselines, i.e., NNGen, CodeNN, Rencos and Transformer, which are either retrieval-based, generation-based or hybrid methods.\", \"results\": \"The results are shown as below. We can see that our method outperforms NNGen, CODENN, Rencos and Transformer by 0.95, 3.27 and 1.12 in terms of BLEU-4, ROUGE-L and METEOR. We also perform the ablation study on PCSD to demonstrate the usefulness of the static graph (i.e., HGNN w/o dynamic) and dynamic graph (i.e., HGNN w/o static). The results also demonstrate that both the static graph and the dynamic graph can contribute to our framework. In summary, the results on both our released benchmark (C benchmark) and existing benchmark (PCSD) demonstrate the effectiveness of our method.\\n\\nMethods \\\\ Metrics | BLEU-4 | ROUGE-L | METEOR \\n\\nNNGen | 21.60 | 31.61 | 15.96 \\n\\nCODE-NN| 16.39 | 28.99 | 13.68 \\n\\nTransformer| 17.06 | 31.16 | 14.37\\n\\nRencos | 22.24 | 36.00 | 18.26 \\n\\nHGNN w/o static | 21.82 | 38.61 | 18.36\\n\\nHGNN w/o dynamic | 21.75 | 38.37 | 18.42\\n\\nHGNN | 23.19 | 39.27 | 19.38\\n\\n\\nQ2 (R1, R4): Ablation study on Retrieval-based Augmentation.\", \"a2\": \"We follow the suggestion and add the experiments to evaluate the impact of the code-based augmentation and summary-based augmentation on our CCSD dataset. We show the results in the in-domain dataset, out-of-domain dataset and the mix of in-domain and out-of-domain dataset as below.\\n\\nOverall, we found that: retrieval-augmented mechanism significantly contributed to the overall model performance (HGNN vs. HGNN w/o augment). More specifically, we noticed that summary-based augmentation has the most impact (HGNN vs. HGNN w/o summary augment). Besides, considering both summary and code augmentation further significantly improved the performance compared to considering only summary augmentation (HGNN vs. HGNN w/o code augment). The summary-based augmentation is more useful, we conjecture that it depends on the specific task: 1) this task is to generate summary and 2) the code and summary are heterogeneous data. Thus, summary-based augmentation could provide a more direct signal for generating better summaries. However, the code-based augmentation could further improve the performance by enhancing the semantic learning of the program. 
Combining them, our method achieves the best result.\", \"in_domain\": \"| Methods \\\\ Metrics | BLEU-4 | ROUGE-L | METEOR |\\n| --- | --- | --- | --- |\\n| HGNN w/o augment | 12.43 | 30.05 | 25.75 |\\n| HGNN w/o summary augment | 13.37 | 30.36 | 26.13 |\\n| HGNN w/o code augment | 15.10 | 32.19 | 27.83 |\\n| HGNN | 16.24 | 33.62 | 29.60 |\", \"out_of_domain\": \"| Methods \\\\ Metrics | BLEU-4 | ROUGE-L | METEOR |\\n| --- | --- | --- | --- |\\n| HGNN w/o augment | 5.56 | 22.64 | 18.27 |\\n| HGNN w/o summary augment | 5.81 | 22.97 | 19.05 |\\n| HGNN w/o code augment | 6.94 | 23.80 | 20.44 |\\n| HGNN | 7.62 | 24.77 | 20.78 |\", \"overall\": \"| Methods \\\\ Metrics | BLEU-4 | ROUGE-L | METEOR |\\n| --- | --- | --- | --- |\\n| HGNN w/o augment | 9.87 | 27.04 | 23.16 |\\n| HGNN w/o summary augment | 10.34 | 27.43 | 23.82 |\\n| HGNN w/o code augment | 12.01 | 28.79 | 24.93 |\\n| HGNN | 13.39 | 30.23 | 26.22 |\\n\\n[1] A Parallel Corpus of Python Functions and Documentation Strings for Automated Code Documentation and Code Generation. Barone et al. IJCNLP(2) 2017.\"}",
"{\"title\": \"Author response to Review #3-Part 2\", \"comment\": \"Due to the word limit for each comment, we add the response to the remaining comments.\", \"q4\": \"Can you provide more details about the human evaluation? What is the agreement value?\", \"a4\": \"We asked 15 volunteers to evaluate the similarity and the relevance (scoring form 1-5) between the generated summary and the ground truth for each test example. We further calculated the average ratings of the 15 volunteers. The standard deviation of the similarity and relevance scores are 0.38 and 0.30, which demonstrates that the volunteers have a high agreement.\", \"q5\": \"z is the similarity score, which is introduced to weaken the negative impact of c\\u2032 on the original training data c, how do you find the best c?\", \"a5\": \"The reviewer might ask how to select the best c\\u2019, i.e., the retrieved source code, based on the original source code c. Actually, z is the text similarity score (i.e., z=sim(c,c\\u2019)), please see sim(c,c\\u2019) in step 1. Note that for each training data c, we will select the best c\\u2019, where c\\u2019 is the candidate (in D\\u2019) that has the highest similarity score with c (i.e., the largest z). We will add a more clear description in the revision.\", \"q6\": \"When you run baselines on your dataset, did you do a hyper-parameter search or just use their default setting (especially the Rencos model)?\", \"a6\": \"We did tune hyperparameters for baseline methods in our experiments. Specifically, we customize the max length of the input and output based on our CCSD. Furthermore, we did a hyper-parameter search including embedding size, learning rate of the baselines to select the best configuration for each baseline for a fair comparison.\", \"q7\": \"There are some retrieved-augment language models in the NLP field that the authors may want to take a look and compare with, for example, \\\"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks\\\".\", \"a7\": \"Thanks for pointing out the related paper. Actually, there are two main differences: 1) the applications are different. We focus on code summarization that requires the better semantic learning of programs (e.g., how to better learn semantics from the existing structures such as AST, PDG ), throwing different challenges with NLP tasks and 2) Since the gap between the code and summary, focus on the problem, we propose a novel retrieval augment mechanism to combine the retrieved information into GNN-based generation model for summary generation, which is different from the mentioned paper, combining a pre-trained retriever (Query Encoder and Document Index) with a pre-trained encoder-decoder (generator) for knowledge-intensive tasks.\", \"q8\": \"Did you run any baselines that are based on pre-trained language model, such as BERT, BART, T5, or even more code related like CodeBERT: A Pre-Trained Model for Programming and Natural Languages?\", \"a8\": \"We added new experiments of CodeBert. Based on the released pre-trained model [1], we finetune the model for the code summarization task on CCSD. The experimental results on CCSD (Overall test set) are shown below. Compared with the large-scale pre-trained model CodeBert, HGNN achieves a comparable performance. Notably, we argue that this is not a fair comparison since CodeBert employs 6 programming languages i.e., Java, JavaScript, PHP, Python, Go and Ruby, with a total of 2,137,293 samples for pretraining. 
In addition, far more GPU resources (16 V100 GPUs) are required to train CodeBert (10 hours per epoch). In contrast, we did not do any pretraining on large datasets, and our model can be quickly trained on one GPU (6 minutes per epoch). In summary, the problem settings and resource requirements of the two works are quite different. However, our approach still achieves competitive performance, demonstrating its effectiveness on the code summarization task.\\n\\n| Methods \\\\ Metrics | BLEU-4 | ROUGE-L | METEOR |\\n| --- | --- | --- | --- |\\n| CodeBert | 10.56 | 29.38 | 32.99 |\\n| HGNN | 13.39 | 30.23 | 26.22 |\\n\\n[1] CodeXGLUE https://github.com/microsoft/CodeXGLUE\"}",
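The retrieval step described in A5 above (pick the candidate c' with the highest similarity score z = sim(c, c')) can be sketched as follows. The paper's exact similarity function is not reproduced here; TF-IDF cosine similarity stands in as one plausible instantiation, and the code snippets are placeholders.

```python
# Illustrative sketch of top-1 retrieval by text similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_best(c, candidates):
    vectorizer = TfidfVectorizer(token_pattern=r"\S+")
    matrix = vectorizer.fit_transform([c] + candidates)
    sims = cosine_similarity(matrix[0], matrix[1:])[0]  # z against each c'
    best = int(sims.argmax())
    return candidates[best], float(sims[best])          # (c', z)

code = "void ReleaseCedar ( CEDAR * c ) { ... }"
database = ["void DelConnection ( CEDAR * cedar , CONNECTION * c ) { ... }",
            "int Add ( int a , int b ) { return a + b ; }"]
print(retrieve_best(code, database))
```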
"{\"title\": \"Author response to Review #3-Part 1\", \"comment\": \"Please refer to A1 in the common responses for the evaluation on a public dataset. Besides, we thank the reviewer on other detailed comments and we addressed these comments as follows:\", \"q1\": \"Attention-based dynamic message passing of graph model is not proposed in this paper, and I don't think it is necessary to have this \\\"hybrid\\\" design, maybe only dynamic one is enough. As shown in Table 1, the dynamic is more important than static (Although we do see an overall performance, except the out-of-domain meteor score, using both are still better, but it is marginal).\", \"a1\": \"We agree that the idea of using the dynamic graph with GNNs is not our contribution since prior works have explored this idea as well. Our contributions mainly include: 1) the retrieval-based augmentation for generation models and 2) the Hybrid GNN leveraging both static and dynamic graphs. We will make the statement on contributions more clear in the revision. We also added more ablation study results for the retrieval-based augmentation. Please refer to A2 in the common response.\\n\\nAs for the Hybrid GNN, although overall the dynamic graph performs better than the static graph in our experiments, we think the static graph is still needed and useful: 1) Considering the complexity of this task and size of the testing set (6388 samples in CCSD and 21028 in PCSD), we think the performance improvement of HGNN (i.e., using both static and dynamic graphs) compared with HGNN w/o static and HGNN w/o dynamic is still promising. 2) There are still some results showing that the static graph achieves better performance than the dynamic graph. Please see the results a) BLUE-4 values (i.e., 12.00 vs 11.87) between (HGNN w/o augment & dynamic) and (HGNN w/o augment & static) and b) The METEOR results in the new added python results in Q1 of common response, i.e., the METEOR values of HGNN w/o dynamic and HGNN w/o static are 18.42 and 18.36, respectively. 3) Moreover, we think GNN with hybrid static and dynamic graphs is a promising idea in general, and we believe researchers working on other domain applications might find it interesting and helpful.\", \"q2\": \"Can you provide more dataset information? For example, the average lines of the code, the average length of the natural language summarization. It seems to me from the examples that each example only has few lines of code and the summarization is very short, it is more like a topic modeling task.\", \"a2\": \"For our C dataset, the average lines of the code are 12.59 and the average token length of summary is 8.22. For the new added dataset PCSD, the average lines of the code are 14.24 and the average token length is 10.91. Although the code summarization task and the topic modeling task share some kind of similarity, we think they are still quite different. First of all, the summarization task usually aims to provide a more fine-grained description of the input data while topic modeling aims to provide some high-level description (e.g., keywords) of the themes of input data. Secondly, different from topic modeling which usually takes as input free-form text and outputs a few keywords to describe its themes, the key challenge of code summarization compared to regular text summarization is that the input (i.e., source code) and the expected output (i.e., summary) are from two very different domains, i.e., they are heterogeneous data. 
To address this challenge, state-of-the-art techniques adopt deep neural networks to learn the code semantics and generate the summary. Our work follows this line of research and proposes a novel method, i.e., the retrieval-augmented hybrid GNN.\", \"q3\": \"To my understanding, this is not the first work combining a retrieval solution with a generation model for code summarization (e.g., Retrieval-based Neural Source Code Summarization), so please modify some of the claims in the paper.\", \"a3\": \"Thanks for the suggestion! We agree that this is not the first work that proposes a retrieval-generation method for code summarization, and we did cite the Retrieval-based Neural Source Code Summarization work (which is the Rencos baseline in our experiments) in our paper. In fact, different from Rencos, which feeds the combination of the retrieved code and the test code to a seq2seq model, we propose a novel retrieval-augmented mechanism that employs the similar code and its summary for model training and encodes more program semantics with a GNN for summary generation. We will make our claim clearer and add more discussion of the differences in the revision.\"}",
"{\"title\": \"Author response to Review #4\", \"comment\": \"We thank the reviewer again for the useful comments.\\nPlease refer to A1 and A2 in the common responses for the evaluation results on a public dataset and the ablation study with only code-based augmentation and only summary-based augmentation.\"}",
"{\"title\": \"Author response to Review #1\", \"comment\": \"Please see the evaluation results on other dataset and the ablation study on Retrieval-based Augmentation in the common response.\", \"other_questions_and_comments\": \"\", \"we_thank_the_reviewer_for_the_detailed_comments_again_and_we_address_them_in_the_revision_as_follows\": \"\", \"q1\": \"Would be nice to provide the mathematical formulation of \\u201cStep 3: Retrieved Summary-based Augmentation\\u201d\", \"a1\": \"We provide the formula as follows in the revision:\\n\\nWe further encode the retrieved summary $s'$ with another BiLSTM model. We represent each token $t'_i$ of $s'$ using the learned embedding matrix $\\\\boldsymbol E^{seqtoken}$. Then $s'$ can be encoded as:\\n\\n\\n\\\\begin{equation}\\n\\\\boldsymbol h_{t_1'},...,\\\\boldsymbol h_{t_T'} = \\\\mathrm{BiLSTM}(E^{seqtoken}_{t_1'} ,..., E^{seqtoken}_{t_T'})\\n\\\\end{equation}\\n\\n\\nwhere $ h_{t'_i}$ is the state of the BiLSTM model for the token $t_i'$ in $s'$ and $T$ is the length of $s'$. We also multiply the similarity score $z$ to $[\\\\boldsymbol h_{t_1'},...,\\\\boldsymbol h_{t_T'}]$ and concatenate with the graph encoding results (i.e., the outputs of the GNN encoder) as the input $[\\\\mathrm{GNN}, z \\\\boldsymbol h_{t_1'},...,z \\\\boldsymbol h_{t_T'}]$ to the decoder.\", \"q2\": \"Suggestion: it would be helpful to provide an input-output example earlier in the paper.\", \"a2\": \"We will add an illustrative example (including the input and the expected summary results) in our paper such that the readers could better understand our method.\", \"q3\": \"Suggestion: would be nice to include real examples of retrieval results in the analysis section.\", \"a3\": \"We will follow the suggestion and add concrete cases for illustrating the retrieval results in our revision. For example, one concrete example (Example 2 in Table 3) is shown as follows:\", \"input_code\": \"void ReleaseCedar(CEDAR *c) {\\n \\n if (c == NULL)\\n {\\n return;\\n }\\n\\n if (Release(c->ref) == 0)\\n {\\n CleanupCedar(c);\\n }\\n}\", \"ground_truth\": \"release reference of the cedar.\", \"retrieved_code\": \"void DelConnection(CEDAR *cedar, CONNECTION *c)\\n\\n{\\n\\n\\tif (cedar == NULL || c == NULL)\\n\\n\\t{\\n\\n\\t\\treturn;\\n\\n\\t}\\n\\n\\tLockList(cedar->ConnectionList);\\n\\n\\t{\\n\\n\\t\\tDebug(\\\"Connection %s Deleted from Cedar.\\\\n\\\", c->Name);\\n\\n\\t\\tif (Delete(cedar->ConnectionList, c))\\n\\n\\t\\t{\\n\\n\\t\\t\\tReleaseConnection(c);\\n\\n\\t\\t}\\n\\n\\t}\\n\\n\\tUnlockList(cedar->ConnectionList);\\n\\n}\", \"retrieved_summary\": \"delete connection from cedar.\\n\\nOur result (HGNN): release reference of cedar.\", \"we_conjecture_that_the_reason_our_method_can_generate_a_high_quality_summary_for_this_example_could_probably_be_that\": \"With the retrieved summary and code, our method might be able to learn the mapping pattern between the method name and summary. For example, DelConnection (CEDAR* cedar, CONNECTION *c) -> \\u201cdelete connection from cedar\\u201c. Then for this test example ReleaseCedar(CEDAR *c), our method might learn to leverage this pattern and get the correct result.\", \"q4\": \"Have you considered using top-k retrieval results instead of top-1?\", \"a4\": \"Thanks for this great suggestion! In fact, we also explored the top-2 and top-3 retrieval results with our method. However, there was no significant performance boost when more retrieval results were used. Moreover, more GPU resources were needed for expensive training. 
Thus, we only consider the top-1 retrieval result in this work. We will add more discussion on the effect of different top-k retrieval results in the revision. And we will leave how to effectively utilize top-k retrieval results in our hybrid framework as future work.\"}",
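The summary-based augmentation formulated in A1 above can be sketched in PyTorch as follows. Dimensions, module names, and the way the decoder consumes the concatenated memory are illustrative assumptions, not the authors' exact configuration.

```python
# Encode the retrieved summary s' with a BiLSTM, scale its states by the
# similarity score z, and concatenate them with the GNN encoder outputs to
# form the decoder input [GNN, z*h_1', ..., z*h_T'].
import torch
import torch.nn as nn

vocab_size, emb_dim, hid_dim = 10000, 128, 128
embed = nn.Embedding(vocab_size, emb_dim)                    # E^{seqtoken}
bilstm = nn.LSTM(emb_dim, hid_dim // 2, bidirectional=True, batch_first=True)

def augment(gnn_out, retrieved_summary_ids, z):
    # gnn_out: (B, N, hid_dim) node encodings from the GNN encoder
    # retrieved_summary_ids: (B, T) token ids of s'; z: (B,) similarity scores
    h, _ = bilstm(embed(retrieved_summary_ids))              # (B, T, hid_dim)
    h = z.view(-1, 1, 1) * h                                 # weight by z
    return torch.cat([gnn_out, h], dim=1)                    # decoder memory

memory = augment(torch.randn(2, 30, hid_dim),
                 torch.randint(0, vocab_size, (2, 12)),
                 torch.tensor([0.8, 0.3]))
print(memory.shape)  # torch.Size([2, 42, 128])
```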
"{\"title\": \"Retrieval-augmented code summarization model with state-of-the-art results on a newly curated dataset\", \"review\": [\"Summary\", \"This paper proposes a retrieval-augmented method for generating code summarization. The model encodes the input code based on its graph structure (Code Property Graph) with a hybrid GNN architecture. The model augments the initial graph representation of the input code based on the representation of the top-1 retrieval result. It also augments the final graph encoding with the BiLSTM encoding of the retrieved summary.\", \"The proposed model is evaluated on a newly curated C code summarization data set and shows state of the art performance compared against previous systems.\", \"Strengths\", \"Releases a new C code summarization data, which will be beneficial for the community.\", \"This paper reports human evaluation results.\", \"Ablation study shows the retrieval augmentation and the new hybrid GNN architecture is helpful.\", \"Weaknesses\", \"The model is not evaluated on any existing code summarization benchmarks. Showing that the proposed architecture is generally applicable by getting good results on more benchmarks will make the story a lot more convincing.\", \"Could include more analysis (see below for details)\", \"Other questions/comments\", \"Would be nice to provide the mathematical formulation of \\u201cStep 3: Retrieved Summary-based Augmentation\\u201d\", \"Would it be possible to ablate code-based summarization vs. summary-based augmentation separately? It would be interesting to see their relative impact.\", \"Suggestion: it would be helpful to provide an input-output example earlier in the paper.\", \"Suggestion: would be nice to include real examples of retrieval results in the analysis section.\", \"Have you considered using top-k retrieval results instead of top-1?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper leverages similar codes to help generate code summarization, and an attention-based dynamic graph model is introduced to further capture the global graph information.\", \"review\": \"Summary:\\n\\nThis paper leverages similar code-summary pairs from existing data to assist code summary generation. The model first retrieves a similar code snippet from the existing database. Then, the author applied GNN over the code property graphs (CPGs). A challenge is that CPGs are typically deep therefore it is difficult to capture long dependencies. The author proposed an attention mechanism to capture global information between nodes, and then a hybrid GNN layer encodes the retrieve-augmented graph. Finally, a generator takes both GNN's output and the retrieved text summary and predict outputs. Experimental results over a new C code indicates that the proposed method outperforms both IR and neural generation methods.\\n\\n########################################\", \"reason_for_score\": \"Overall, I vote for accepting. Both the idea of leveraging existing code and also the adaptive layer to capture long dependencies are interesting and the experiments look solid. Although I would still like to see the results from previous existing datasets.\\n\\n########################################\", \"some_comments_about_the_experiments\": \"a. As an application study, it is still necessary to compare the model over previous benchmarks, even though there are some issues with those datasets. \\n\\nb. A pair of missing ablation studies are: a generator still takes the text summary of retrieved code, but not use the augmented graph; and vice versa, the generator only takes the graph information but not the retrieved text summary. This can further indicate which part of the retrieved information is more useful.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good Results and More Evaluation Needed\", \"review\": \"\", \"overview\": \"The authors tackle the code summarization problem. They retrieve the most similar code and use it as additional input features. They also using GNN to get features from static and dynamic graphs. They evaluate their results on their collected C projects (C-Code-Summarization Benchmark) with 1-2% improvement on automatic evaluation.\", \"reasons_to_accept\": [\"They collect and release a new challenging C benchmark for code summarization.\", \"They propose a hybrid-GNN solution to capture global graph information.\"], \"reasons_to_reject\": [\"No evaluation on any publicly available datasets. Even though there are some issues about duplication, I will still expect to see such a comparison with other baselines.\", \"Attention-based dynamic message passing of graph model is not proposed in this paper, and I don't think it is necessary to have this \\\"hybrid\\\" design, maybe only dynamic one is enough. As shown in Table 1, the dynamic is more important than static (Although we do see an overall performance, except the out-of-domain meteor score, using both are still better, but it is marginal).\", \"Questions & Suggestions:\", \"Can you provide more dataset information? For example, the average lines of the code, the average length of the natural language summarization. It seems to me from the examples that each example only has few lines of code and the summarization is very short, it is more like a topic modeling task.\", \"To my understanding, this work is not the first work combining retrieval solution with generation model for code summarization (e.g., Retrieval-based Neural Source Code Summarization) so please modify some of the claims in the paper.\", \"Can you provide more details about the human evaluation? What is the agreement value?\", \"\\\"z is the similarity score, which is introduced to weaken the negative impact of c\\u2032 on the original training data c\\\", how do you find the best c?\", \"When you run baselines on your dataset, did you do a hyper-parameter search or just use their default setting (especially the Rencos model)?\", \"There are some retrieved-augment language models in the NLP field that the authors may want to take a look and compare with, for example, \\\"Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks\\\".\", \"Did you run any baselines that are based on pre-trained language model, such as BERT, BART, T5, or even more code related like CodeBERT: A Pre-Trained Model for Programming and Natural Languages?\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
Hf3qXoiNkR | Learning from others' mistakes: Avoiding dataset biases without modeling them | [
"Victor Sanh",
"Thomas Wolf",
"Yonatan Belinkov",
"Alexander M Rush"
] | State-of-the-art natural language processing (NLP) models often learn to model dataset biases and surface form correlations instead of features that target the intended underlying task. Previous work has demonstrated effective methods to circumvent these issues when knowledge of the bias is available. We consider cases where the bias issues may not be explicitly identified, and show a method for training models that learn to ignore these problematic correlations. Our approach relies on the observation that models with limited capacity primarily learn to exploit biases in the dataset. We can leverage the errors of such limited capacity models to train a more robust model in a product of experts, thus bypassing the need to hand-craft a biased model. We show the effectiveness of this method to retain improvements in out-of-distribution settings even if no particular bias is targeted by the biased model. | [
"dataset bias",
"product of experts",
"natural language processing"
] | Accept (Poster) | https://openreview.net/pdf?id=Hf3qXoiNkR | https://openreview.net/forum?id=Hf3qXoiNkR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"DszXX7DVNB",
"PRaAGtr8T5",
"FHW0HGdP_ug",
"yjaciOjMThu",
"ByqhpaJM6LK",
"KD9FetJ_7U",
"aQtUGAxgGt9",
"vr_Y5sIqJFU",
"MemGGWWCt_",
"QPdOXZwflxg",
"lDmhAEKBpM",
"EM6nENB9ut8"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_comment",
"official_review"
],
"note_created": [
1610040418283,
1606196295928,
1606195933773,
1606195894339,
1606195615839,
1606195436655,
1606195408872,
1604644128873,
1603982935090,
1603892938345,
1603640857559,
1603015971636
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3718/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3718/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3718/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3718/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3718/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper considers the problem of learning models for NLP tasks that are less reliant on artifacts and other dataset-specific features that are unlikely to be reliable for new datasets. This is an important problem because these biases limit out-of-distribution generalization. Prior work has considered models that explicitly factor out known biases. This work proposes using an ensemble of weak learners to implicitly identify some of these biases and train a more robust model. The work shows that weak learners can capture some of the same biases that humans identify, and that the resulting trained model is significantly more robust on adversarially designed challenge tasks while sacrificing little accuracy on the test sets of the original data sets.\\n\\nThe paper's method is useful, straightforward, and intuitively appealing. The experiments are generally well conducted. Some of the reviewers raised questions about evaluating on tasks with unknown biases. The authors addressed these concerns in discussion and we encourage them to include this in the final version of the paper using the additional page.\"}",
"{\"title\": \"Response to Reviewer1\", \"comment\": \"We want to sincerely thank you for your comments. We are encouraged to see that this area of research is very active and that multiple concurrent works are proposed independently.\", \"we_want_to_address_to_your_comments\": \"1. **The authors argue they have shown the model with limited capacity capture biases. However, this has been shown already**\\n\\nIn [1], the authors also look at models with low capacity to discover examples that support the heuristics. However, they use a different definition of \\\"hard examples\\\": forgotten examples. From [1], \\\"an example is forgotten if it goes from being correctly to incorrectly classified (because of multiple gradient updates performed on other examples).\\\" It is similar to the definition used in Data Cartography [3] for which we show in Appendix A6 that the groups they identified strongly overlap with our groups whose definition of hardness is based on the pair (final loss, final uncertainty). Our results show that we do not need to track the whole fine-tuning of our low-capacity weak learner and yet can produce similar improvements. \\n\\nFrom a practical point of view, we can imagine a setup where one could use a publicly released fine-tuned checkpoint (such as the ones found on huggingface.co/models) and use it as (an already trained) weak learner.\\n\\n2. **The main method proposed in this paper, is exactly the same method proposed in [2].**\\n\\nWe can now see that [2] was available on OpenReview mid-July 2020 as an anonymous pre-print. We became aware of it when it was posted to arxiv September 25th, 2020 (it was finally published in EMNLP2020\\u2019s proceedings on November 9th, 2020). Upon becoming aware of this work as preparing for this submission (October 2nd, 2020), we mentioned it explicitly in our manuscript and added its results. We believe this was following proper research protocol, and think that our work should be evaluated independently from the concurrent work of [2]. \\n\\nWe also want to highlight a core difference with this work. In [2], the authors use the same high-capacity model (BERT-base) for both the weak model and the main model. They \\u201ccontrol\\u201d the weak learner by only presenting a tiny fraction of the data (for instance, 2\\u2019000 examples for MNLI randomly sampled among the 392\\u2019000 training examples). In contrast, we \\u201ccontrol\\u201d the weak learner by limiting its capacity (number of parameters) but fine-tune it on the whole training dataset. We argue that the method used to \\u201ccontrol\\u201d the weak learner in [2] can have drawbacks. Namely, it fails to leverage a significant proportion of the signal present in the training set.\\n\\nTo fairly compare the two setups, we design an extreme experiment: we adversarially sample the 2K examples fed to the weak learner by training a hypothesis-only classifier and selecting the examples this classifier can\\u2019t correctly classify. Intuitively, these examples are harder to classify because the bias is not (or less) present.\\nWe observe that the generalization decreases compared to a randomly sampled set of 2k examples at no cost of performance on in-domain inputs. This suggests that there are unfortunate small subsamples of (hard) examples that provide less debiasing ability to the product of experts setup. 
Intuitively, by training on a set of hard and less biased examples, the weak model learns a stronger explanation of the data that does not rely solely on superficial biases; relying on such biases is what characterizes the uncertain/incorrect group, which plays a crucial role in debiasing the main model.\\n\\nMain model=BERT-base, Weak model=BERT-base:\\n\\n| Weak model's training set | MNLI matched (acc.) | HANS Ent (acc.) | HANS Non-Ent (acc.) |\\n|--|--|--|--|\\n| Random 2K examples | 84.32 | 98.21 | 18.51 |\\n| Adv 2K examples (highest loss for hypothesis-only classifier) | 84.41 | 98.85 | 14.42 |\\n\\n3. **Though the method in [1] is different, the discussion in that paper still would apply here as well.**\\n\\nWe have made it clearer that our results in Section 5.2 corroborate and complement insights from [1].\\n\\n[3] Dataset Cartography: Mapping and Diagnosing Datasets with Training Dynamics\\\\\\nSwabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, Yejin Choi\\\\\\nEMNLP 2020\"}",
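The adversarial subsampling experiment described in the response above (feed the weak learner the 2K training examples a hypothesis-only classifier handles worst) can be sketched as follows; the classifier `hyp_clf` and the input tensors are assumed to exist and are illustrative.

```python
# Select the k training examples with the highest per-example loss under a
# hypothesis-only classifier; these are the "hard", less biased examples.
import torch
import torch.nn.functional as F

def hardest_k(hyp_clf, hypothesis_inputs, labels, k=2000):
    with torch.no_grad():
        logits = hyp_clf(hypothesis_inputs)                  # (N, n_classes)
        losses = F.cross_entropy(logits, labels, reduction="none")
    return torch.topk(losses, k).indices  # indices of the adversarial subset
```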
"{\"title\": \"Response to Reviewer4 (2/2)\", \"comment\": \"Regarding the amount of bias, our intuition is the following: in the worst-case scenario, no bias is picked up by the weak learner (it can happen for instance when the bias is present in a tiny fraction of the data). However, Swayamdipta et al. [2020] shows on a range of different dataset that the category \\u201chard-to-learn\\u201d examples is always populated. In Appendix A6, we detail the connection between the data maps and our categories and highlight that the \\u201chard-to-learn\\u201d examples strongly overlap with our \\u201ccertain/incorrect\\u201d category. It means that even though the weak learner is not picking up biases, there are still examples in the \\u201ccertain/incorrect\\u201d examples which correspond to the hard examples which are upsampled in our PoE training.\\n\\n[Swayamdipta et al., 2020]\\\\\", \"dataset_cartography\": \"Mapping and Diagnosing Datasets with Training Dynamics\\\\\\nSwabha Swayamdipta, Roy Schwartz, Nicholas Lourie, Yizhong Wang, Hannaneh Hajishirzi, Noah A. Smith, Yejin Choi\\\\\\nEMNLP 2020\"}",
"{\"title\": \"Response to Reviewer4 (1/2)\", \"comment\": \"We want to sincerely thank you for taking the time to carefully read our submission and providing such detailed suggestions. We have updated our submission by addressing your comments.\\n\\n**Is there a specific reason why you decided to focus on NLP only?**\\n\\nOur group expertise is in NLP, and it is an area where these biases have been of particular concern. In theory our method is general, but we make no claims to its empirical impact in other domains. \\n\\n**You could improve the impact of your approach by citing papers that tackle the same problem with similar solutions from different fields.**\\n\\nThis is a good point. We will add a reference to other works, particularly Cadene et al [2019].\\n\\n**Next to Eq1: Why an element wise sum is equivalent to an element wise multiplication after softmax? It seems wrong to me.**\\n\\nWe apologize for the typo. The corrected equation is $softmax(e) \\\\propto softmax(w) \\\\odot softmax(m)$ you have noted.\\n\\n**How to choose the number of parameters of your weak learner?**\\n\\nWe took the smallest BERT model publicly available (TinyBERT with 4.4 million parameters) and did not tune that aspect of the work. Our experiments show that the weaker the pretrained model is, the more common it is to produce \\u201ccertain but incorrect\\u201d responses (Section 5.2). This category gives most of the signals for debiased training.\\n\\nOther works [Clark et al., 2019; He et al., 2019] suggest that shallow classifiers on top FastText/Glove representations may also lead to good results.\\n\\n**I still don't understand what is the learning method that you propose PoE or PoE+CE? What to choose between PoE and PoE+CE?**\\n\\nWe found that PoE+CE loss controls the balance between the features from the dataset (superficial cues or not) and the signal from the weak learner. This is similar to how in Distillation it is common to use a mixture of Distill+CE. This indicates that not all the information picked up by the weak model should be discarded.\\n\\n**And most critically, if you don't assess the type of biases and the amount of biases included in the dataset, how to be sure that your method will have a beneficial impact? Then, if you need to assess the type of biases, using another method that specifically targets them could be more efficient.**\\n\\n*Start - Copying part of the response to Reviewer3*\\n\\nThis is an interesting point. While it is difficult to enumerate all sources of bias, we focus on \\u201csuperficial cues\\u201d that correlate with the label in the training set but do not transfer. These superficial cues are sufficiently apparent to be captured by a shallow neural network. In a sense, the biases we are targeting are defined by the weakness of the model. For instance, Conneau et al., [2018] suggest that word presence can be detected with very shallow networks (linear classifier on top of FastText bag of words): Table 2 shows very high accuracy for \\u201cWord Content\\u201d, the probing task of detecting which of the 1\\u2019000 target words is present in a given sentence.\\n\\nTo verify that a weak model is still effective with \\u201chard to detect\\u201d biases, we consider an example where the bias is only present in a small portion of the training (while the rest of the examples either contradict the bias or are neutral towards this bias). This remaining bias is difficult to spot but should be able to be captured by the weak model. 
We remove from the MNLI training set all the examples that exhibit one of the two biases detailed in Section 4.1 (high word overlap between premise and hypothesis & entailment; and negation in the hypothesis & contradiction). We end up with 268K examples which present a \\u201clow amount\\u201d of bias. \\n\\nWe apply our debiasing method with these 268K examples as our training set. For comparison, we train a main model with standard cross-entropy on a set of 268K randomly selected examples. Our results on HANS confirm that our debiasing method is still effective even when the bias is hard to detect.\\n\\n| Training data | Main Model | Weak Model | Loss | MNLI matched (acc.) | HANS Ent (acc.) | HANS Non-Ent (acc.) |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Adversarially selected 268K examples | BERT-base | TinyBERT | PoE | 83.3 | 92.41 | 38.74 |\\n| Randomly selected 268K examples | BERT-base | \\u2205 | CE | 84.01 | 98.15 | 16.50 |\\n\\n*End - Copying part of the response to Reviewer3*\"}",
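The corrected product-of-experts equation in the response above is equivalent to summing log-probabilities before a final softmax. Below is a minimal PyTorch sketch of the resulting PoE (and PoE+CE) training loss; the frozen weak learner and the `ce_weight` mixing term are stated assumptions about the setup, not the paper's exact code.

```python
# softmax(e) ∝ softmax(w) ⊙ softmax(m)
# <=> log p_e = log_softmax(log p_w + log p_m)
import torch
import torch.nn.functional as F

def poe_loss(main_logits, weak_logits, labels, ce_weight=0.0):
    log_pm = F.log_softmax(main_logits, dim=-1)
    log_pw = F.log_softmax(weak_logits.detach(), dim=-1)   # weak learner frozen
    loss = F.nll_loss(F.log_softmax(log_pm + log_pw, dim=-1), labels)
    if ce_weight > 0:                                      # the PoE+CE variant
        loss = loss + ce_weight * F.nll_loss(log_pm, labels)
    return loss

print(poe_loss(torch.randn(4, 3), torch.randn(4, 3), torch.tensor([0, 2, 1, 0])))
```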
"{\"title\": \"Response to Reviewer3\", \"comment\": \"We thank you for your sincere comments.\\n\\n**I'd imagine knowing how weak the weak learner needs to be requires some intuition about which biases you are trying to remove**\\n\\nThis is an interesting point. While it is difficult to enumerate all sources of bias, we focus on \\u201csuperficial cues\\u201d that correlate with the label in the training set but do not transfer. These superficial cues are sufficiently apparent to be captured by a shallow neural network. In a sense, the biases we are targeting are defined by the weakness of the model. For instance, Conneau et al., [2018] suggest that word presence can be detected with very shallow networks (linear classifier on top of FastText bag of words): Table 2 shows very high accuracy for \\u201cWord Content\\u201d, the probing task of detecting which of the 1\\u2019000 target words is present in a given sentence.\\n\\nTo verify that a weak model is still effective with \\u201chard to detect\\u201d biases, we consider an example where the bias is only present in a small portion of the training (while the rest of the examples either contradict the bias or are neutral towards this bias). This remaining bias is difficult to spot but should be able to be captured by the weak model. We remove from the MNLI training set all the examples that exhibit one of the two biases detailed in Section 4.1 (high word overlap between premise and hypothesis & entailment; and negation in the hypothesis & contradiction). We end up with 268K examples which present a \\u201clow-amount\\u201d of bias. \\nWe apply our debiasing method with these 268K examples as our training set. For comparison, we train a main model with standard cross-entropy on a set of 268K randomly selected examples. Our results confirm on HANS that our debiasing method is still effective even when the bias is hard to detect.\\n\\n| Training data | Main Model | Weak Model | Loss | MNLI matched (acc.) | HANS Ent (acc.) | HANS Non-Ent (acc.) |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Adversarially selected 268K examples | BERT-base | TinyBERT | PoE | 83.3 | 92.41 | 38.74 |\\n| Randomly selected 268K examples | BERT-base | \\u2205 |CE | 84.01 | 98.15 | 16.50 |\\n\\n**For research questions such as this (\\\"is the model using the heuristic?\\\") I always find it unsatisfying to think about performance gains that are in between 0 and 100.**\\n\\nThat is a fair point.\\n\\nFor context, McCoy et al [2019] show that the model can \\u201cget it\\u201d by simply augmenting the training set with heuristic-non-entailed examples (i.e. examples similar to the HANS non-entailed part). With standard fine-tuning, the model gets an almost-perfect accuracy on the HANS non-entailed examples. So we know that for the fixed capacity of BERT-base, we can get to a better minimum (i.e. a minimum that leads to better generalization). Therefore, the question is whether we can optimize the model to reach this minimum without the extra curated data. Intuitively, even though we do not have formal evidence, we can expect that an 80% accuracy reflects the fact the optimization ended in a better generalizing state than when it is at 20% accuracy. 
Thus, thinking about the 0-100 range as a continuous scale can be helpful (at least outside the ~40-60 range).\\n\\n[Conneau et al., 2018]\\\\\", \"what_you_can_cram_into_a_single_vector\": \"Probing sentence embeddings for linguistic properties\\\\\\nAlexis Conneau, German Kruszewski, Guillaume Lample, Lo\\u00efc Barrault, Marco Baroni\\\\\\nACL 2018\\n\\n[McCoy et al., 2019]\\\\\", \"right_for_the_wrong_reasons\": \"Diagnosing Syntactic Heuristics in Natural Language Inference\\\\\\nTom McCoy, Ellie Pavlick, Tal Linzen\\\\\\nACL 2019\"}",
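The "low amount of bias" training set described in the copied response can be built with a simple filter like the sketch below. The overlap threshold and negation cues are illustrative guesses, not the paper's exact heuristics, and the tiny `train_set` is placeholder data.

```python
# Drop MNLI examples exhibiting either targeted bias: high premise-hypothesis
# word overlap with an entailment label, or a negation word in the hypothesis
# with a contradiction label.
NEGATIONS = {"no", "not", "never", "nothing", "nobody"}

def is_biased(premise, hypothesis, label, overlap_thresh=0.9):
    p, h = set(premise.lower().split()), set(hypothesis.lower().split())
    overlap = len(p & h) / max(len(h), 1)
    if label == "entailment" and overlap >= overlap_thresh:
        return True
    if label == "contradiction" and h & NEGATIONS:
        return True
    return False

train_set = [("a man is eating", "a man is not eating", "contradiction"),
             ("a dog runs fast", "a dog runs fast", "entailment"),
             ("kids play outside", "children are outdoors", "entailment")]
filtered = [ex for ex in train_set if not is_biased(*ex)]
print(len(filtered))  # the first two biased examples are removed -> 1
```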
"{\"title\": \"Response to Reviewer5 (2/2)\", \"comment\": \"4. **I think more experiments would be useful**\\n\\nAdditionally, we have added experiments on one other textual dataset: FEVER [Thorne et al, 2018] using the symmetric challenge set [Schuster et al, 2019]. Our method is again effective at removing potential biases present in training sets. The results are:\\n\\n| FEVER (avg on 6 seeds) | | Loss | Dev set (acc.) | Symmetric Test set (acc.) |\\n|------------------------|------------------------|----------|-------------------|---------------------------|\\n| Reported | Schuster et al. (2019) | CE | **85.85** +/- 0.5 | 57.46 +/- 1.6 |\\n| | Mahabadi et al (2020) | CE | 85.99 | 56.49 |\\n| | Mahabadi et al (2020) | PoE | 84.46 | **66.25** |\\n| | | | | |\\n| Ours | Bert-base-uncased | CE | 85.61 +/- 0.3 | 55.13 +/- 1.5 |\\n| | TinyBERT - W | CE | 69.43 +/- 0.2 | 43.10 +/- 0.2 |\\n| | | | | |\\n| Ours | Bert-base-uncased - M | PoE | 81.97 +/- 0.5 | **59.95** +/- 3.3 |\\n| | Bert-base-uncased - M | PoE + CE | **85.29** +/- 0.6 | 57.86 +/- 1.4 |\\n\\n[Thorne et al, 2018]\\\\\", \"fever\": \"a large-scale dataset for Fact Extraction and VERification]\\\\\\nJames Thorne, Andreas Vlachos, Christos Christodoulopoulos, Arpit Mittal\\\\\\nNAACL 2018\\n\\n[Schuster et al, 2019]\\\\\\nTowards Debiasing Fact Verification Models\\\\\\nTal Schuster, Darsh J Shah, Yun Jie Serene Yeo, Daniel Filizzola, Enrico Santus, Regina Barzilay\\\\\\nEMNLP 2019\"}",
"{\"title\": \"Response to to Reviewer5 (1/2)\", \"comment\": \"1. **Comparison to [Clark et al, 2019] - BiDAF**\\n\\nThe reported results from Clark et al, [2019] do use a modified BiDAF model while we use a BERT model. Our goal in Table 3 was to highlight the drop of performance when evaluating on the adversarial sets as opposed to in-domain sets. Results from Clark et al [2019] give us a different comparative trade-off in adversarial set performance for a near-state-of-the-art class of model.\\n\\n2. **Is it possible to better quantify what useful information is being learned (and subsequently thrown out) by the weak learner?**\\n\\nSince in-domain performance of the weak learner is an imperfect proxy for learned information, we use a transfer benchmark (see Appendix A.4) to evaluate the generalization capabilities of the weak model. We train the weak model on SNLI using standard cross-entropy and evaluate the transfer capabilities on a range of other natural language inference datasets. We found the following results:\\n\\n| Transfer Benchmark | Main Model | Weak Model |\\n| --- | --- | --- |\\n| Test set | BERT-base (acc.) | TinyBERT (acc.) |\\n| AddOne | 86.82 | 72.35 |\\n| DPR | 50.29 | 50.01 |\\n| SPR | 58.27 | 42.89 |\\n| FN+ | 54.35 | 48.18 |\\n| Scitail | 69.71 | 59.27 |\\n| GLUE | 55.34 | 48.55 |\\n\\nThese numbers suggest that the weak model (TinyBERT) is able to learn some useful information but greatly lags behind the main model (BERT-base).\\n\\nIn general, our experiments suggest that the less generalizable the weak model, the more often it is \\u201ccertain but incorrect\\u201d. These misclassifications are most beneficial for training the main model. We, therefore, want a weak learner that focuses on less useful information. \\n\\n3. **it\\u2019s somewhat unclear to me how well this method would translate to a setting with unknown biases [...] I\\u2019d like to see whether this method applies well to tasks where it isn\\u2019t immediately obvious that the bias is easy to learn**\\n\\n*Start - Copying part of the response to Reviewer3*\\n\\nThis is an interesting point. While it is difficult to enumerate all sources of bias, we focus on \\u201csuperficial cues\\u201d that correlate with the label in the training set but do not transfer. These superficial cues are sufficiently apparent to be captured by a shallow neural network. In a sense, the biases we are targeting are defined by the weakness of the model. For instance, Conneau et al., [2018] suggest that word presence can be detected with very shallow networks (linear classifier on top of FastText bag of words): Table 2 shows very high accuracy for \\u201cWord Content\\u201d, the probing task of detecting which of the 1\\u2019000 target words is present in a given sentence.\\n\\nTo verify that a weak model is still effective with \\u201chard to detect\\u201d biases, we consider an example where the bias is only present in a small portion of the training (while the rest of the examples either contradict the bias or are neutral towards this bias). This remaining bias is difficult to spot but should be able to be captured by the weak model. We remove from the MNLI training set all the examples that exhibit one of the two biases detailed in Section 4.1 (high word overlap between premise and hypothesis & entailment; and negation in the hypothesis & contradiction). We end up with 268K examples which present a \\u201clow-amount\\u201d of bias. \\n\\nWe apply our debiasing method with these 268K examples as our training set. 
For comparison, we train a main model with standard cross-entropy on a set of 268K randomly selected examples. Our results on HANS confirm that our debiasing method is still effective even when the bias is hard to detect.\\n\\n\\n| Training data | Main Model | Weak Model | Loss | MNLI matched (acc.) | HANS Ent (acc.) | HANS Non-Ent (acc.) |\\n| --- | --- | --- | --- | --- | --- | --- |\\n| Adversarially selected 268K examples | BERT-base | TinyBERT | PoE | 83.3 | 92.41 | 38.74 |\\n| Randomly selected 268K examples | BERT-base | \\u2205 | CE | 84.01 | 98.15 | 16.50 |\\n\\n*End - Copying part of the response to Reviewer3*\"}",
"{\"title\": \"Potentially useful extension of prior work\", \"review\": \"*Summary*: This paper proposes a method for training model that are robust to spurious correlations, building upon prior work that uses product-of-experts and a model explicitly trained on a dataset bias (e.g., a hypothesis-only model). Instead of using a model explicitly trained to learn the dataset bias, the authors use a \\u201cweak learner\\u201d with limited capacity. Then, this model is used in the PoE setting as in past work. The advantage of this method is that a model developer doesn\\u2019t need to know that a bias exists, since the hope is that the weak learner will implicitly learn the bias.\\n\\n*Strengths*: A thorough study of using a limited-capacity auxiliary model to train more robust models, which helps a final model ignore spurious correlations that are easy to learn.\\n\\n*Weaknesses*: The work is a rather straightforward extension of prior work. Furthermore, the authors only evaluate on 2 textual tasks---I would have liked to see more experiments with spurious correlations in vision (e.g., VQA or the datasets used in https://openreview.net/forum?id=ryxGuJrFvS), and other experiments on text (e.g., the TriviaQA-CP dataset in the Clark paper). As is, it\\u2019s hard to glean how broadly applicable this method actually is. I would have also liked to see more of a comparison with methods that use known bias (e.g., Clark et al or He et al)---it seems like some of the comparisons in the table aren\\u2019t completely fair.\\n\\n*Recommendation*: 6 . I think this paper is a potentially-useful extension of a prior method, but I\\u2019m still somewhat unconvinced that this method is applicable in settings where the bias is hard to detect, which is what we really care about (since, if the bias is easy to detect, we can use Clark et al and other methods).\", \"comments_and_questions\": \"1. The comparisons to Clark et al aren\\u2019t fair comparisons for adversarial SQuAD, since the Clark et al paper uses a different base model for adversarial SQuAD (modifed BIDAF).\\n\\n2. The weak learner is a rather blunt instrument. It picks up dataset biases, but it also likely picks up features that are actually useful---not all robust features have to be difficult to learn. Is it possible to better quantify what useful information is being learned (and subsequently thrown out) by the weak learner? This would make it easier to determine if using it is worthwhile.\\n\\n3. While it\\u2019s true that the weak model empirically learns to re-learn the same dataset biases targeted in prior work (e.g., negation correlates with contradiction), it\\u2019s somewhat unclear to me how well this method would translate to a setting with unknown biases. The MNLI / SQuAD examples are a bit artificial since we already have knowledge of the bias---it\\u2019s possible that weak learners can pick up on spurious features that are \\u201ceasy to learn\\u201d, which are the same ones that humans notice. I\\u2019d like to see whether this method applies well to tasks where it isn\\u2019t immediately obvious that the bias is easy to learn; perhaps a synthetic experiment would be useful here. Is it possible to modulate the learnability of the bias? The synthetic experiments in the paper suggest that for cases the bias is hard to learn, this method isn\\u2019t very effective, which makes sense---in how many of the cases in the literature is the bias hard to learn? 
This is another reason why I think more experiments would be useful.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Straightforward method for reducing model's reliance on spurious features\", \"review\": \"Summary:\\n\\nThis paper focuses on the known problem that current NLP models tend to solve tasks by exploiting superficial properties of the training data that do not generalize. For example, in the NLI task, models learn that negation words are indicative of the label \\\"contradiction\\\" and high word overlap is indicative of the label \\\"entailment\\\". There have been many recent solutions proposed for mitigating such behavior, but existing methods have tended to assume knowledge of the specific dataset biases a priori. In this paper, the authors propose a method based on product of experts that doesn't assume particular knowledge of specific dataset biases. The method works by first training a weak model and then training a \\\"main\\\" model using a loss that upweights examples on which the weak model performs poorly (namely, predicts the wrong answer with high confidence). The assumption is that weak models will exploit heuristics, and so this method will deincentivize the main model to use those same heuristics. The authors evaluate on a range of tasks, including a simulated bias setting, and NLI setting, and a QA setting, and offer a fair amount of analysis of their results. In particular, the analysis showing that the weak learners do in fact adopt the biases which have been documented elsewhere in the literature is interesting, and the discussion of \\\"how weak does the weak learner need to be\\\" is appreciated (a few questions on this below).\", \"strengths\": [\"Straightforward method for addressing an important known problem with neural NLP models\", \"Thorough analysis, not just a \\\"method and results\\\" paper\"], \"weaknesses\": [\"Novelty might be somewhat limited, method is not wildly creative (but I don't necessarily think \\\"wild creativity\\\" is a prerequisite for scientific value). The authors do a good job of directly contending with the similar contemporaneous work in their paper\", \"Additional Comments/Questions:\", \"Just a few thoughts that came up while reading...\", \"The weakness-of-weak-learner analysis is interesting. I imagine this is not something that can be understood in absolute terms, i.e., I would not expect there to be some level of weakness that is sufficient for all biases and all datasets. E.g., surely the lexical overlap bias is \\\"harder\\\" to learn than a lexical bias like the presence of negation words, since recognizing lexical overlap presupposes recognizing lexical identity. Therefore, I'd imagine knowing how weak the weak learner needs to be requires some intuition about which biases you are trying to remove, which runs counter to the primary thrust of the paper, namely, removing bias without knowing what the bias is. Thoughts?\", \"Its interesting that even with this the performance on hans non-entailed is still only 56%, which is better but still not exactly good, and doesn't suggest the model has learned the \\\"right\\\" thing so much as its has learned not to use that particular wrong thing. For research questions such as this (\\\"is the model using the heuristic?\\\") I always find it unsatisfying to think about performance gains that are in between 0 and 100. 
E.g., when we talk about human learning, we usually see an abrupt shift when the learner \\\"gets it\\\", and our hope in removing the spurious features with methods like yours would be that we'd help the neural models similarly \\\"get it\\\" and reach 100%, at least on examples that isolate the effect of this spurious feature. I don't expect you to have an answer for this, but I'm just raising it to hear your thoughts.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #4\", \"review\": \"## Reason for score\\n\\nThe research problem is critical. The solution is appropriate and novel. The claims are validated. The experiments are interesting.\\nHowever, the writing in section 3, 4 et 5 should be improved. If so, I would be willing to raise my score.\\n\\n## My background\\n\\nMy research is focused on detecting and avoiding data biases (or spurious correlations) learned by deep neural networks. This is the exact scope of this paper. However, my area of expertise is computer vision and multimodal text-image, not natural language processing.\\n\\n## Summary\", \"context\": \"The paper focuses on automatically detecting data biases learned by natural language processing models and overcoming them using a learning strategy.\", \"problem\": \"\", \"the_authors_identify_and_tackle_issues_of_state_of_the_art_methods\": [\"they are required to already know about a certain bias to be overcome.\"], \"solution_and_novelty\": \"The proposed method consists in 1) training a weak model that aims at detecting biases 2) overcoming these biases by training a main model using a product of experts (Hinton, 2002) with the predictions of the fixed weak model.\", \"claim\": [\"A weak model can be used to discover data biases\", \"The proposed method produces a main model that generalize better to out-of-distribution examples\", \"## What I liked the most\", \"meta-problem of automatically detecting and overcoming biases in neural networks is critical\", \"well contextualized\", \"relevant issues of state of the art have been identified\", \"intro and related work are easy to read and understand\", \"novel, simple and interesting method to tackle them\", \"interesting figures\", \"experiments are interesting and well chosen\", \"## What could be improved\", \"1. Abstract, introduction and 2. Related work\", \"Your research problem and solution are general and can be applied to many fields. Is there a specific reason why you decided to focus on NLP only?\", \"You could improve the impact of your approach by citing papers that tackle the same problem with similar solutions from different fields. \\\"Clark et al. 2019 Don\\u2019t take the easy way out: Ensemble-based methods for avoiding known dataset biases\\\" that you already cite ran some experiments in multiple fields (NLP, VQA, etc.). \\\"Cadene et al. Rubi: Reducing unimodal biases for visual question answering (NeurIPS2019)\\\" in VQA could also be cited.\", \"3. Proposed Method\", \"Next to Eq1: Why an element wise sum is equivalent to an element wise multiplication after softmax? It seems wrong to me.\", \"It could be useful to have a general definition of the PoE loss (instead of just an example of binary cross entropy in Eq2)\", \"See 4.3, you should define PoE+CE here.\", \"4. Experiments\", \"Overall, I think it is important that you improve the writing for this section and reduce jargon. It is really difficult to understand for readers that are not familiar with the datasets on which you perform your study. 
Also, it is really difficult to understand which dataset is \\\"in-distribution\\\" or \\\"out-of-distribution\\\".\", \"You don't define \\\"development matched accuracy\\\" before using it.\", \"4.1\", \"You use too many footnotes that could be included in the text.\", \"4.2\", \"You don't define \\\"CE\\\" (even in the caption of Figure 2).\", \"In Table 2, you could reduce jargon by using Weak and Main instead of \\\"W\\\" and \\\"M\\\".\", \"In Table 2, you don't define \\\"An.\\\" even in the caption.\", \"4.3\", \"I don't understand why \\\"PoE+CE\\\" is better on \\\"Hard\\\"\", \"I don't like that you propose to use \\\"PoE+CE\\\" as your method of choice \\\"to counteract these effects\\\" without defining it in section 3. To be clear, I still don't understand which learning method you propose: PoE or PoE+CE?\", \"5. Analysis\", \"5.2\", \"Title is on two lines instead of one\", \"I don't understand \\\"When trained jointly with the larger MediumBERT weak learner......\\\" How many parameters? Don't expect your reader to look at Figure 4 to obtain this information.\", \"6. Conclusion\", \"Could you add a discussion about the limitations of your approach? In particular: how should you choose the number of parameters of your weak learner? What should you choose between PoE and PoE+CE? And most critically, if you don't assess the type of biases and the amount of bias included in the dataset, how can you be sure that your method will have a beneficial impact? Then, if you need to assess the type of biases, using another method that specifically targets them could be more efficient.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Clarification on ICLR policy\", \"comment\": \"Thank you for your review! I wanted to clarify that ICLR policy is that not citing/comparing with unpublished work, particularly recent work, may be excused. See https://iclr.cc/Conferences/2021/ReviewerGuide\", \"q\": \"Are authors expected to cite and compare with very recent work? What about non peer-reviewed (e.g., ArXiv) papers?\", \"a\": \"We consider papers contemporaneous if they are published within the last two months. That means, since our full paper deadline is Oct 2, if a paper was published on or after Aug 2, 2020, authors are not required to compare their own work to that paper. Authors are encouraged to cite and discuss all relevant papers, but they may be excused for not knowing about papers not published in peer-reviewed conference proceedings or journals.\\n\\n\\n\\nSince [2] was submitted to EMNLP by June 3, and notifications of acceptance were Sept. 14, this work can be considered contemporaneous. Of course, it is helpful to point authors to this work and ask them to cite it, but please consider reviewing the paper on its own merits.\"}",
"{\"title\": \"review\", \"review\": \"Paper summary:\\nThe authors argue that they have proposed a method to train robust models to biases without having prior knowledge of the biases. They argue also to provide analysis on how weak learner capacity impacts the in-domain/out-of-domain performance.\", \"reasons_to_reject\": \"1) The authors argue they have shown the model with limited capacity capture biases. However, this has been shown already in [1] in 2019 and therefore is not a contribution of the authors.\\n2) The main method proposed in this paper, is exactly the same method proposed in [2]. Please note that [2] was already available in early July 2020, and on top of existing work, the paper does not provide other contributions. \\n3) About the third argued contribution on showing how the performance of the debiasing method change based on the capacity of weak learners, in [1], the authors included the discussion between the choice of weak learners on their impact. Though the method in [1] is different, the discussion in that paper still would apply here as well. Please refer to table 1-3 and Figure 1 in [1]. \\n\\nGiven the points above, and since the main method in the paper is proposed in [2], the paper does not provide enough contributions to be suitable for the ICLR venue. \\n\\n[1] Robust Natural Language Inference Models with Example Forgetting, Yaghoobzadeh et al, https://arxiv.org/pdf/1911.03861.pdf, 2019 \\n[2] Towards Debiasing NLU Models from Unknown Biases, Utama et al, 13 July 2020, https://openreview.net/forum?id=UHpxm2K-jHE, EMNLP 2020\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
nVZtXBI6LNn | Fast and Complete: Enabling Complete Neural Network Verification with Rapid and Massively Parallel Incomplete Verifiers | [
"Kaidi Xu",
"Huan Zhang",
"Shiqi Wang",
"Yihan Wang",
"Suman Jana",
"Xue Lin",
"Cho-Jui Hsieh"
] | Formal verification of neural networks (NNs) is a challenging and important problem. Existing efficient complete solvers typically require the branch-and-bound (BaB) process, which splits the problem domain into sub-domains and solves each sub-domain using faster but weaker incomplete verifiers, such as Linear Programming (LP) on linearly relaxed sub-domains. In this paper, we propose to use the backward mode linear relaxation based perturbation analysis (LiRPA) to replace LP during the BaB process, which can be efficiently implemented on typical machine learning accelerators such as GPUs and TPUs. However, unlike LP, LiRPA, when applied naively, can produce much weaker bounds and cannot even check certain conflicts of sub-domains during splitting, making the entire procedure incomplete after BaB. To address these challenges, we apply a fast gradient based bound tightening procedure combined with batch splits and a design that minimizes usage of the LP bounding procedure, enabling us to effectively use LiRPA on accelerator hardware for the challenging complete NN verification problem and significantly outperform LP-based approaches. On a single GPU, we demonstrate an order of magnitude speedup compared to existing LP-based approaches. | [
"neural network verification",
"branch and bound"
] | Accept (Poster) | https://openreview.net/pdf?id=nVZtXBI6LNn | https://openreview.net/forum?id=nVZtXBI6LNn | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"uT1JBijhPp",
"aoE627kAnzi",
"vrQOZOh-sNa",
"yDHLLPkULUl",
"M7XCT3oM1Rr",
"YLf37HnnXU",
"n4MEEDsYRM8",
"RWFtDoxQ_4F",
"eNCfsEEjA2E",
"aKgyoeVSy1l",
"G1RHLSci9Ai",
"XxMeotqZCIZ",
"e1r_3UAoQ0f",
"S6Q_prxDAA_",
"YEMYD0RJdN",
"X74S7dSa3EM",
"qIhmAxlCcGc",
"5eA3cAre7mv"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040352870,
1606281889699,
1606281403020,
1606270770949,
1606261532938,
1606236960761,
1606022752994,
1606022559949,
1605609917490,
1605606920159,
1605606782825,
1605606438297,
1605606114356,
1605448298692,
1604008549199,
1603922678990,
1603851382249,
1603703497165
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3717/Authors"
],
[
"~Alessandro_De_Palma1"
],
[
"ICLR.cc/2021/Conference/Paper3717/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3717/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3717/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3717/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Thank you for your submission to ICLR. As noted, several of the reviewers had fairly low confidence in evaluating this submission. However, based upon the reviewers and commenters who were familiar with this line of work, as well as my own evaluation of the paper, I believe it is clearly worth publishing at ICLR. The proposed method pushes the boundary in methods for exact branch and bound-based verification of neural networks, using clever tricks from existing relaxations. And while the method is still likely to be relegated to relatively small networks for the time being, pushing forward the state of the art in exact verification is still a worthy goal suitable for publication at ICLR. I thus think that the paper is quite clearly above the bar, and should be accepted for publication.\"}",
"{\"title\": \"We have greatly improved the presentation and clarity of our paper, and hope the reviewer can champion our paper\", \"comment\": \"Dear AnonReviewer2,\\n\\nWe would like to thank you again for recognizing our main contributions and providing the very constructive reviews. We have added the additional experiments as you suggested, and also included an ablation study.\\n\\nOur initial submission was unpolished and was hard to understand for non-experts, and probably gave other reviewers a bad impression. However, we made a tremendous effort during the rebuttal period and significantly improved the quality of our paper. We have polished the paper for several rounds and greatly improved writing, clarity and presentation. We believe we have addressed the concerns on the readiness for publication.\\n\\nBecause you are the only expert reviewer who are familiar with this field and understand our main contributions well, we hope you can champion our paper during the next stage of discussions and clarify our contributions to other reviewers who are not very confident. We believe our revised version is significantly better than our initial submission in terms of writing and clarity, while our contributions and main results remain unchanged. Thank you for your help and support!\\n\\nSincerely,\\n\\nAuthors of Paper 3717\"}",
"{\"title\": \"Thank you so much for reading through our revised paper and the encouraging comments\", \"comment\": \"Dear AnonReviewer4,\\n\\nThank you so much for reading through our revised paper! We are very grateful to you for your encouraging comments and we are happy to know that our paper is much better presented and more friendly to non-experts now!\\n\\nDuring the next stage of discussion, we hope you can also let other reviewers know that we have greatly improved our paper. We believe our technical contribution is clear and important, but the low ratings from AnonReviewer1 and AnonReviewer 3 were based on the bad first impression from our unpolished initial submission. We will really appreciate your support for our paper. Thank you!\\n\\nSincerely,\\n\\nPaper 3717 Authors\"}",
"{\"title\": \"We made significant improvements on our paper and resolved problems found by other reviewers. We feel it is unjustifiable to decrease the rating.\", \"comment\": \"Dear AnonReviewer1,\\n\\nThank you so much for taking the time to read our response and comments from other reviewers.\\n\\nThe most problems by other reviewers were on the writing and presentation of our paper (e.g., hard to understand, language issues, unclear definitions, etc). During the rebuttal period, we took this opportunity and greatly improved our paper. We have made a tremendous effort during the rebuttal period, not only to fix all problems reported by reviewers, but also significantly improved the quality of our paper. We have rewritten many paragraphs for enhancing clarity, included examples and nice-looking figures to explain our algorithm, and also reorganized and enriched all sections to ease understanding. We polished our paper several times to fix language issues.\\n\\nAdditionally, we presented theorems and proofs as you and other reviewers suggested, and also cited the reference you mentioned. We believe we have addressed questions by all reviewers.\\n\\nSince the problems found by other reviewers were based on the initial submission of our paper, and we have significantly improved our paper during the rebuttal period, we hope the reviewer can reevaluate our paper based on our latest revision. We feel it is unjustifiable to decrease the rating of our paper despite our tremendous efforts on addressing problems from all reviewers and significantly improving our paper.\\n\\nWe would like to thank you again for your constructive feedback. Could you please point out if there are any additional questions that we were not able to address in our responses? Thank you.\\n\\nSincerely,\\n\\nPaper 3717 Authors\"}",
"{\"title\": \"Impressive improvement of presentation\", \"comment\": \"Dear authors of paper 3717,\\n\\nThank you for your detailed response and for your efforts in improving the paper. I would like to compliment you on a great improvement in presentation! I really like the figures and the more non-expert-friendly introduction. I've increased my score to 5 in light of this update. \\n\\nRegards, \\nAnonReviewer4\"}",
"{\"title\": \"General Response: we greatly improved writing and presentation of our paper, and added proofs for completeness\", \"comment\": \"Dear Reviewers,\\n\\nWe really appreciate your constructive comments which are very helpful for improving our paper. During the discussion period, we take the opportunity to revise our paper based on the suggestions from reviewers. **We have significantly improved the writing and clarity of our paper**. \\n\\nSpecifically, we have significantly enhanced Section 2 for a comprehensive discussion of backgrounds, added a precise definition of the verification problem, included more examples with figures, and presented theorems for completeness. We rewrote many paragraphs for better clarity and also polished the paper several times to fix many language and consistency issues. Additionally, we also added ablation study experiments, comparisons to latest works and results on multi-core CPUs as requested by the reviewers. We believe our paper is much easier to follow now.\\n\\nOne question raised by the reviewers is that we did not state a theorem for the completeness of our algorithm. In our revision, we formally **added Theorem 3.1 and 3.2 and also included more discussions on the soundness and completeness** in Section 2.2, 2,3 and 3.2. We note that in many existing works on complete verification [1][2][3], the theorem for the completeness of branch and bound is omitted and not given explicitly (as they are conceptually straightforward), but we agree with the reviewers adding formal statements like our Theorem 3.1 and 3.2 is very helpful for understanding.\\n\\nWe feel the most concerns from the reviewers are on the clarity and presentation problems of our paper, and **our technical contributions are solid and impressive**, which is recognized by the highly confident AnonReviewer2. We hope our latest revision can address the concerns on the readiness for publication of our paper.\\n\\nSince today is the last day of the discussion period, we will really appreciate it if the reviewers can go over our latest paper revision and our detailed response, and reevaluate our paper based on them. Please feel free to ask us any questions you may still have and we will be more than happy to answer them before the deadline (Nov 24 AoE).\\n\\nThank you again for reviewing our paper and we look forward to discussing with you.\\n\\nSincerely,\\n\\nPaper 3717 Authors\", \"references\": \"[1]Rudy Bunel, Jingyue Lu, Ilker Turkaslan, P Kohli, P Torr, and P Mudigonda. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research, 21(2020)\\n\\n[2]Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790\\u20134799, 2018.\\n\\n[3] Jingyue Lu and M Pawan Kumar. Neural network branching for neural network verification. International Conference on Learning Representation (ICLR), 2020.\"}",
"{\"title\": \"Could you please reevaluate our paper based on our revision and response? Thank you!\", \"comment\": \"Dear AnonReviewer4,\\n\\nWe hope the reviewer can re-evaluate our paper based on our updated revision and detailed response, since the second stage of the discussion period is closing soon.\\n\\nAs suggested by the reviewer, we have added detailed background on verification and branch and bound (Section 2), and also include formal proofs for the completeness of our methods (Theorem 3.1 and 3.2). We provide an example of LiRPA and BaB in Figure 2. We fixed typos, rewrote many paragraphs for easier understanding, and clarified our speedup claim. We moved the related work section to the last and also refined it as suggested. \\n\\nWe believe the low rating from the reviewer is mostly based on the inadequate presentation of our idea and unpolished writing of the initial draft, but the technical contribution of our paper is solid, which is also recognized by the highly confident AnonReviewer2. We hope the reviewer can re-evaluate our paper based on the latest revision. We believe our paper is much easier to understand now.\\n\\nWe really appreciate the very constructive review from the reviewer. Your suggestions have helped us improve our paper a lot. Please feel free to let us know any additional questions you may have. Thank you.\\n\\nSincerely,\\n\\nPaper 3717 Authors\"}",
"{\"title\": \"Could you reevaluate our paper based on our revision and response? Thank you.\", \"comment\": \"Dear AnonReviewer3,\\n\\nSince the second stage of the discussion period is closing soon, we hope the reviewer can re-evaluate our paper based on our updated paper and detailed response.\\n\\nIn short, we have added a precise definition of the verification problem, and formal proofs for the completeness of our methods (Theorem 3.1 and 3.2). We also greatly improved the writing of our paper and fixed many language and consistency issues. Many paragraphs are rewritten for better clarity and the entire paper is polished several times. \\n\\nWe believe the low rating from the reviewer is mostly based on the inadequate presentation of our idea and unpolished writing of the initial draft. The technical contribution of our paper is solid, which is also recognized by the highly confident AnonReviewer2. We hope the reviewer can re-evaluate our paper based on the latest revision. We believe our paper is much easier to follow now and has reached the quality for publication.\\n\\nWe sincerely thank the reviewer again for the very insightful review, which greatly helped us to improve our paper. We will be glad to answer any additional questions you may have. Thank you.\\n\\nSincerely,\\n\\nPaper 3717 Authors\"}",
"{\"title\": \"We made our statements more clear, added ablation study and comparisons to your work\", \"comment\": \"Dear Alessandro,\\n\\nThank you for your interests in our work and we really appreciate your valuable comments and insightful questions. In our revised version, we have greatly improved the clarity of our paper, and we hope you can check it again. We now respond to your individual comments below:\\n\\n### 1. Usage of LiRPA bounding in previous works: \\n\\nThank you for pointing this out. We have revised that sentence in Section 1 to make it more clear. Previous works using LiRPA bounding did not use the optimization procedure as we proposed in Section 3.1, so they only produce relatively loose bounds. As you mentioned, the success of our work can be attributed to the much tighter bound after a joint optimization of intermediate layer bound and final bound. We have added an ablation study to show the importance of the optimization process in Appendix C.\\n\\nAdditionally [1] only conducts split on input domain to use the LiRPA bounds. In our paper, we split ReLU nodes instead, as it is usually more effective with high dimensional inputs (Bunel et al., 2018). In Theorem 3.1 we show that when splitting ReLU nodes, if LiRPA is used as the only bounding procedure, BaB is still incomplete. This is a critical observation that is also not discussed in previous works.\\n\\n### 2. About your work (Bunel et al. 2020a):\\n\\nThanks for clarifying the details of your great work (Bunel et al. 2020a). We have appreciated its contributions in terms of GPU acceleration without LP in Section 5 (related work) and we have provided detailed comparisons against it in Section 4 Table 1 and 2. \\n\\nThank you so much for providing the github link. The code link included in paper (Bunel et al. 2020a) does not contain the complete verification part, so it was difficult for us to make a comparison. In our paper, we currently directly report the best numbers in (Bunel et al. 2020a) in our tables. Compared to that number, we are still overall (up to 5X) faster.\\n\\nWe will look into the code you provided, make necessary adjustments, and reproduce the experiments in your paper. We plan to update the numbers for (Bunel et al. 2020a) in a later revision (those experiments can take some time).\\n\\n### Questions 1. Which bounds will optimization converge to if intermediate bounds are fixed?\\n\\nIf the intermediate bounds are fixed, the optimization converges to the optimal dual solution or the LP solution built on the same intermediate bounds. As you mentioned, this relationship has been shown in (Salman et al., 2019). However, the main difference here is that we also optimize intermediate bounds so stronger results can be obtained. \\n\\n\\n### Question 2. How is the tightening of intermediate bounds performed?\\n\\nSuppose we set the slope of lower bounds for all ReLU neurons as a vector variable, $\\\\alpha$. Based on this slope, we can compute intermediate bounds as a function of $\\\\alpha$. The final bound is a function of intermediate bounds and $\\\\alpha$, so it is essentially just a complex function of $\\\\alpha$. Then we use a gradient over $\\\\alpha$ to optimize the final layer bound as the objective.\\n\\n\\nThe tightening procedure is actually very straightforward because we use auto_LiRPA, an automatically differentiable implementation of LiRPA. 
We do not need to manually derive this gradient, and it is actually a very complicated function because of interactions between the slopes in different layers (the bounds are computed recursively).\\n\\n\\n### Question 3. Non-convexity:\\n\\nYes, the optimization problem is non-convex and significantly more complicated than LPs. However, any setting of a valid slope in [0, 1] yields a valid bound, so the non-convexity does not affect the soundness of our verifier. We use gradient based optimizers like Adam to optimize this bound. A convergence guarantee is not necessary here because it does not affect soundness. We do not aim to achieve the global optimum, and in practice we actually only run a few steps (like 10 steps) of gradient descent, which can greatly improve the solution in a very short time. The non-convexity is potentially helping us here, because it is a much more complicated objective than the previous LP dual based approach and can potentially yield tighter bounds. In practice, this non-convex problem can be sufficiently optimized with gradient descent and works very well.\\n\\n\\n### Question 4: Ablation study:\\n\\nWe provide an ablation study in Appendix C. We find that our optimized LiRPA bound is indeed the most important contributor here, while batch splits also help to further speed up verification.\\n\\n\\nLastly, we thank you again for your comments; we also enjoyed your paper very much. We hope you can check the latest version of our paper and we would like to discuss further with you. We are still working on revising this paper, so feel free to let us know if you have any additional comments. Thank you.\"}",
"{\"title\": \"New experiments on LiRPA on GPU vs CPU, revised paper\", \"comment\": \"Thank you so much for correctly recognizing the main contributions of our paper. We greatly appreciate your encouraging comments, and we are glad to answer your questions below.\\n\\n1. Additional experimental setup: we have updated our paper and provide experimental setup details in Appendix B. We will release our full source code once accepted.\\n\\n2. As suggested by the reviewer, we provided results on using single and multi-core CPU based LiRPA computation for our algorithm in Appendix C. Existing baselines such as BaBSR require an LP solver, which is hard to accelerate on GPU or even on multi-core CPU. The basic computation of LiRPA is just matrix multiplication (like NN training), so it naturally enjoys the parallelization in existing deep learning software libraries such as Pytorch. Figure 5 in Appendix shows that our LiRPA based method on a single CPU core is still competitive when compared to BaBSR+LP on a single CPU, and we enjoy a speedup on multi-core CPUs.\\n\\nAdditionally, we have greatly improved our paper in our revision, added examples to illustrate our problem under study and make the formulation and correctness of our algorithm more clear. We hope the reviewer can discuss our main contributions with other reviewers during the second and third stages of discussion. Thank you.\"}",
"{\"title\": \"We have added theorems and the additional reference\", \"comment\": \"We appreciate the helpful reviews from you, and we would like to address your concerns below:\\n\\n1. Proofs:\\n\\nIn our revised paper, we have greatly improved every section. In section 2, we discussed the soundness of LiRPA method. In section 3, we gave two Theorems to show the completeness of our full branch and bound algorithm based on the property of LiRPA method. In fact, the proof for correctness of branch and bound is relatively straightforward, and many previous works in complete verification [1] [2] [3] did not give an explicit proof and just imply it is correct (the proof would be very similar in every work). However, we have added proofs in our paper to ensure clarity. Specifically, in Section 3.2, we have discussed the completeness in Theorem 3.1 and 3.2. Theorem 3.1 shows that feasibility checking is important when using LiRPA as the bounding procedure in branch and bound, and Theorem 3.2 shows that with feasibility checking from LP, completeness is obtained, just like other works using branch and bound [1] [2] [3].\\n\\n2. Additional reference: \\n\\nThank you for providing this insightful connection! We have cited [Bentkamp, Blanchette JAR 2019] on proof assistant based higher order logic provers.\\n\\nLastly, we hope the reviewer can check out our revised paper. We gave more clear formal definitions of the verification problem, as well as more intuition, background and examples. We also include detailed discussions on soundness and completeness as well as proofs for correctness. We hope the reviewer can re-evaluate our paper based on our revision and update the rating. Thank you.\\n\\n[1]Rudy Bunel, Jingyue Lu, Ilker Turkaslan, P Kohli, P Torr, and P Mudigonda. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research, 21(2020)\\n\\n[2]Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790\\u20134799, 2018.\\n\\n[3] Jingyue Lu and M Pawan Kumar. Neural network branching for neural network verification. International Conference on Learning Representation (ICLR), 2020.\"}",
"{\"title\": \"Paper greatly improved, easier to understand. Added examples, inituitions, and proofs for completeness.\", \"comment\": \"We really thank the reviewer for all the suggestions on improving our paper, and help us find many typos. We address your concerns below:\\n\\n### Concern 1: Hard to understand, require a lot of background knowledge\\n\\nThank you for pointing out this concern. We have greatly improved our paper in terms of introducing sufficient background and motivations especially for researchers outside of this field. In our revised Introduction, we have followed your suggestions and made the definition of verification problem very clear (in Introduction), and given a detailed walkthrough of background (section 2), and examples and figures to illustrate our main idea (Figure 1 and 2).\\n\\n### Concern 2: Proof for completeness:\\n\\nWe include more discussions on soundness and completeness in Section 2 and 3. In Section 2, the soundness of LiRPA has been proved in previous work (Xu et al., 2020) and we add a discussion on page 4. In Section 3.2, we have discussed the completeness in Theorem 3.1 and 3.2. Theorem 3.1 shows that feasibility checking is important when using LiRPA as the bounding procedure in branch and bound, and Theorem 3.2 shows that with feasibility checking from LP, completeness is obtained, just like other works using branch and bound [1] [2] [3].\\n\\nIt is worth noting that many existing important works on complete verification such as [1] [2] [3] do not have a completeness proof, and it seems the completeness of the BaB process is implied, so most papers did not give proofs explicitly, and such a proof can look almost the same in every work. However, we do agree with the reviewer that we need to discuss more on the completeness of our algorithm and add explicit theorems for completeness, and we have done so in our revision.\\n\\n### Concern 3: Speedup claim:\\n\\nThank you for pointing this out! We forgot to update these numbers in introduction when our experiment results were updated. We have fixed the speedup claims in our paper. Our speedup is 30X compared to basic BaB baselines, and up to 5X compared to the state-of-the-art verifier. In our revision, we also added a very recent baseline (proximal BaBSR) and our method still performs best.\\n\\n### Suggestion 1: Introduce more examples\\n\\nFollowing the reviewer\\u2019s suggestion, we have added Figure 2 to explain LiRPA bounds and the BaB procedure. Additionally, we also introduce more examples in the text: for example, in the introduction, we give the definition of the verification problem and show the example after the definition. We hope these updates will make our paper much easier to understand.\\n\\n### Suggestion 2: Related work section\\n\\nAs suggested by the reviewer, we have moved the related work section to the end of the paper, and enhanced the background section. We introduce the basic definitions of verification problems and the notation of completeness as early as in Introduction, and also give more former notations in section 2.\\n\\n### Typos:\\n\\nThank you for pointing out these typos. We have fixed them and also greatly improved writing and clarity in our new revision. We added the definition of verification, completeness and incompleteness in Introduction. \\n\\n### Conclusion:\\n\\nWe have significantly improved the writing of our paper and provide sufficient background and intuitions in our updated paper. 
We have also formally discussed and proven the completeness of our proposed algorithm. Since most of the concerns about our paper are about writing and presentation, we hope these changes address the reviewer's concerns, and we hope the reviewer can reconsider the score based on our revision. Feel free to let us know if you have any further questions regarding our revision. Thank you.\\n\\n[1] Rudy Bunel, Jingyue Lu, Ilker Turkaslan, P Kohli, P Torr, and P Mudigonda. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research, 21(2020)\\n\\n[2] Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790\\u20134799, 2018.\\n\\n[3] Jingyue Lu and M Pawan Kumar. Neural network branching for neural network verification. International Conference on Learning Representations (ICLR), 2020.\"}",
"{\"title\": \"Proof has been formally added; paper revised with a clear definition of problem\", \"comment\": \"We greatly appreciate the very helpful comments from the reviewer. We have added necessary proofs, reorganized and revised many parts of our paper to make it easier to understand. We hope the reviewer can check out the updated version of our paper. We provide detailed answers below:\\n\\n1. Formalization and definition of the task of verification:\\n\\nTo make the verification problem very clear, we now have added a definition at the beginning of introduction, and also give an example after the definition. In section 2, we also give a more formal definition with discussions. We forgot to clearly define the properties under verification because the benchmarks used in our paper are fairly standard in this field, but now it has also been added everywhere, including in the experiment section. We hope it is now easier to understand the precise task of verification.\\n\\n2. Proofs:\\n\\nWe include more discussions on soundness and completeness in Section 2 and 3. In Section 2, the soundness of LiRPA has been proved in previous work (Xu et al., 2020) and we add a discussion on page 4. In Section 3.2, we have discussed the completeness in Theorem 3.1 and 3.2. Theorem 3.1 shows that feasibility checking is important when using LiRPA as the bounding procedure in branch and bound, and Theorem 3.2 shows that with feasibility checking from LP, completeness is obtained, just like other works using branch and bound [1] [2] [3].\\n\\nIt is worth noting that many existing important works on complete verification such as [1] [2] [3] do not have a completeness proof, and it seems the completeness of the BaB process is implied, so most papers did not give proofs explicitly, and such a proof can look almost the same in every work. However, we do agree with the reviewer that we need to discuss more on the completeness of our algorithm and add explicit theorems for completeness, and we have done so in our revision.\\n\\n3. Paper hard to follow, confusing sentences, language issues, typos\\n\\nWe have fixed all confusing sentences and typos you mentioned and also greatly improve the writing and language of this paper. We added a clear definition of the verification problem, and also used more formal language to introduce the background and our algorithm in section 2 and 3. We also added figures and examples to illustrate our ideas more clearly. We rewrote those sentences that were hard to understand. The overall flow and clarity of our paper have greatly improved now, and we hope you can take a look again.\", \"conclusions\": \"We would like to thank the reviewer again for your constructive feedback. We hope you can read our revised paper once again where we have made great effort to improve writing and readability. We hope our answers address all your concerns and hope you can re-evaluate our revised paper, because the main issue was mainly language and representation problems rather than technical ones. Please kindly let us know if you have any additional comments. Thank you.\\n\\n\\n\\n[1]Rudy Bunel, Jingyue Lu, Ilker Turkaslan, P Kohli, P Torr, and P Mudigonda. Branch and bound for piecewise linear neural network verification. Journal of Machine Learning Research, 21(2020)\\n\\n[2] Jingyue Lu and M Pawan Kumar. Neural network branching for neural network verification. 
International Conference on Learning Representations (ICLR), 2020.\\n\\n[3] Rudy R Bunel, Ilker Turkaslan, Philip Torr, Pushmeet Kohli, and Pawan K Mudigonda. A unified view of piecewise linear neural network verification. In Advances in Neural Information Processing Systems, pp. 4790\\u20134799, 2018.\"}",
"{\"title\": \"Interesting work that lacks a detailed presentation and analysis\", \"comment\": \"I believe this is an interesting work and contains some valuable ideas for the neural network verification community, such as the joint optimization on intermediate bounds and output bounds. However, the paper is lacking crucial details and inaccurately describes the state of the art for complete neural network verification.\\n\\nFor instance, in the introduction, it is claimed that LiRPA-based bounds \\u201chave never been explored as the main driver in the complete verification settings\\u201d. However, LiRPA bounding has been successfully employed in the past as last layer bounding for complete verification. For instance, by [1] (see Bunel et al., (2020b), section 4.1). Moreover, GPU-accelerated and massively parallel branch and bound frameworks already exist. The work by Bunel et al. (2020a) does not rely on LP solvers and, similarly to the proposed approach, iteratively tightens the bounds of a LiRPA method (Wong and Kolter, 2018), from which it is initialized.\\n\\nThe main contribution of the paper is then a gradient-descent based approach that optimizes over the lower bound slopes of LiRPA-based bounds. \\nThe choice of the LiRPA formulation seems to allow for joint optimization over intermediate bounds, and I believe this is the key to the works\\u2019 empirical success.\", \"i_would_like_to_ask_the_authors_for_the_following_clarifications\": \"1)\\tLet us assume that intermediate bounds are kept fixed throughout the procedure. Which bounds will this optimization converge to? Due to the primal-dual correspondence of LiRPA methods described by Salman et al. (2019), it seems likely that the authors might be solving a formulation resembling the dual presented in Theorem 1 of (Wong and Kolter, 2018). \\n2)\\tHow is the tightening of intermediate bounds performed? \\nAre the intermediate slopes ($D^i_l$) obtained from the output bounding simply plugged in the LiRPA problems for the intermediate bounds? If so, is this done after each iteration of gradient descent? \\n3)\\tIn section 4.1, the authors seem to suggest that the upper intermediate slopes ($D^i_u$) are treated as a function of the lower slopes ($D^i_l$) before them. Does this make the optimization problem non-convex? If so, how is the non-convexity handled?\\n\\nAdditionally, I believe the empirical comparison would benefit from the addition of a GPU-accelerated baseline such as the work by Bunel et al., (2020a), whose complete verification pipeline is available at [2]. Moreover, an ablation study measuring the effect of the tightened intermediate bounds would be quite interesting.\\n\\n[1] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. USENIX 2018.\\n[2] https://github.com/verivital/vnn-comp/tree/master/2020/CNN/oval_framework\"}",
"{\"title\": \"Strong experimental results, straightforward approach\", \"review\": \"The authors demonstrate that using a modification of the LiRPA method during the branch-and-bound process for solving the neural network verification problem can lead to significant speed-ups. The experimental results are strong. The authors convincingly show that the their method outperforms the existing state-of-the-art method by Lu & Kumar (2020) on an experimental setup similar to that work. The application of LiRPA to branch-and-bound is straightforward (since any incomplete verifier can be used), as is the use of gradient descent to improve the bound given by LiRPA (a standard technique applied to improve the bounds of certain verifiers).\\n\\nDespite the fairly straightforward approach, the strength of the empirical results deserves attention. Overall, a solid contribution to the literature, and proof that research on incomplete verifiers leads to better complete verifiers. \\n\\nSome questions/requests:\\n\\n- The experimental setup details should be provided in the final version. \\n\\n- How dependent is the performance of LiRPA on GPUs? For example, if we do a CPU-only comparison between the different methods, would other methods now outperform LiRPA? And if so by how much? What if we use multiple cores? I would understand if a detailed comparison is too computationally intensive, but I would like some sense of this.\\n\\n---------------------------\", \"update_after_author_response\": \"I thank the authors of the paper for significantly improving the prose of the paper, and I agree that the changes make the paper more self-contained and approachable. I have kept my ratings as my score was for primarily for the strong experiment results (and the score was also conditional on the paper being more polished). I am happy to support this paper for acceptance, but I am a little concerned about the degree of changes in the final version versus the initial submission, given the number of concerns the other reviewers had.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"No Proof of Completeness?\", \"review\": \"The work proposes a new algorithm that can be used for the complete verification of neural networks (NNs). Unfortunately, the authors do not define the verification problem they study: Based on the second paragraph of the introduction, one is given a neural network (NN) on the input, and the task is to determine whether the NN has a specific formally defined property - but which kind of properties are verified is never explained. Intuitively, one would expect verification to focus on determining whether the NN gives a \\\"correct\\\" output for certain inputs, but that does not really match the general description given in the paper, and I did not find a place where the verification problem is formalized further. Without knowing what \\\"verification\\\" means in the context of this paper, it's difficult to follow the reasoning provided in the paper without having some rough idea of what kind of properties one wishes to verify (for instance, the discussion about using LP bounds assumes that the property that is being verified can be expressed using LP). I believe this issue could have been avoided by formalizing the precise task of verification (the verification problem).\\n\\nOn that note, many parts of the paper seemed rather confusing and hard to follow, either due to inconsistencies or due to language issues. For instance, the sentence \\\"Input domain split is shown effective in verifying the properties with low input dimensions while performs as poorly as incomplete verifiers on higher dimension properties\\\" on page 2 seems to contradict the definitions given for complete and incomplete verification. How can a complete verifier perform \\\"as poorly as incomplete verifiers\\\" if, by the definition given in the paper, complete verifiers must always correctly determine whether the NN has the given \\\"property\\\" or not? (Section 2: \\\"Complete verifiers guarantee to terminate either the property is proved or a violation is located.\\\")\\n\\nIn terms of presentation, the submission contains an incredibly large number of minor language issues (roughly 1 per 2-3 lines on average, ranging from minor article issues to malformed sentences; see also the quote in the previous paragraph), and I strongly encourage authors to fix these as they have a rather disruptive effect when trying to read and understand the paper. A very small number of examples is provided below:\\nPage 1\\n-\\\"cause the changes of NN predictions\\\" -> \\\"cause changes of NN predictions\\\"\\n-\\\"Recently, a framework of Branch and Bound (BaB) (Bunel et al., 2018) is widely used for efficiently verifying NNs\\\" - cannot combine \\\"recently\\\" and \\\"is\\\".\\n-\\\"adopts Linear Program (LP) bounding procedure\\\" -> \\\"adopts a Linear Program (LP) bounding procedure\\\"\\nPage 2\\n-\\\"for construct LPs\\\" -> \\\"for constructing LPs\\\"\\n\\nThe main contribution of the paper is the use of incomplete verifiers for complete verification, and the authors propose an algorithm for doing that using LIRPA bounds. However, I found no proof (or anything resembling a proof) showing that the resulting algorithm is correct, i.e., that it performs complete verification for neural networks. In fact, the problem is not even properly and formally defined in the paper. 
Hence, regardless of the experimental results, I do not think that the submission is ready for publication at this stage.\", \"post_rebuttal_comment\": \"I thank the authors for responding to my comments. The updated version addresses most of the criticisms raised in the review, and I have raised the score accordingly. My new score is \\\"5\\\", partly because I believe that after performing such a large-scale and comprehensive overhaul of the paper (which was certainly necessary), the paper should go through a full new reviewing process.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Difficult to understand\", \"review\": \"### Summary\\nThis paper describes a branch-and-bound (BaB) process for neural network verification that uses linear relaxation based perturbation analysis (LiRPA). It gives a way to tighten the bounds obtained via LiRPA. Overall, this results is a complete verification procedure, which is an order of magnitude faster than existing linear programming (LP) procedures.\\n\\n### Strengths\", \"the_biggest_strength_of_the_paper_is_the_impressive_experimental_results_in_section_5\": \"the method described in the paper is several times faster than previous methods.\\n\\n### Concerns \\nMy main concern is that the paper is very difficult to understand. It seems to require a lot of background knowledge about the problem and the related literature, which is not clearly provided in the paper. I had trouble understanding the problem, the setup, and the proposed algorithm. \\n\\nAnother concern is that the paper claims that the proposed verifier is complete, but there is no proof of that. It does not seem like that's something too difficult to prove (given that BaB + LP is complete), but it should still be clearly stated. \\n\\nFinally, the claim that the proposed framework outperforms previous methods by \\\"at least 10X and up to 50X\\\" is unsupported. Based on the results in section 5, a fairer statement regarding the speed would be \\\"at least 3X and up to 15X\\\" faster.\\n\\n### Reasons for score\\nIt is very difficult to judge this contribution, as the paper is hard to understand. The two main reasons for the score I give are 1) some of the claims of the paper are unsupported (see above) and 2) I believe this paper will have a much better chance of conveying the idea and making a contribution, if more background knowledge and intuition is provided throughout.\\n\\n### Suggestions for improvement that have not affected the score I gave to the paper\\n\\nOne way to significantly improve the paper is to introduce more examples. An example consisting of a simple neural network to refer to throughout the explanation of LiRPA and BaB, and also in section 4, would make the paper much easier to read. \\n\\nAs a reader, I felt I couldn't appreciate the related work section so early in the paper. 
I encourage you to either move it later in the paper, or even better: introduce more background/examples in the introduction, as well as the notion of completeness, so that the related work is easier to understand.\", \"some_typos\": [\"Abstract: \\\"we demonstrate over a magnitude speedup ...\\\" -> \\\"we demonstrate speedup of an order of magnitude ...\\\"\", \"First paragraph of section 2: \\\"guarantee to terminate either ...\\\" -> \\\"guarantee to terminate when either ...\\\"\", \"First paragraph of page 3: \\\"used in state-of-the-art verifier (Lu & Kumar, 2020)\\\" -> \\\"used in the state-of-the-art verifier by Lu & Kumar (2020)\\\".\", \"Second paragraph of page 3: \\\"Our paper firstly leverage ...\\\" -> \\\"Our paper firstly leverages ...\\\"\", \"First paragraph of section 3.1: \\\"linear functions in the form of ...\\\" -> \\\"linear functions of the form ...\\\"\", \"Start of section 4.1.: \\\"As we have introduced ...\\\" -> \\\"As we discussed ...\\\"\", \"Mid page 5: \\\"greatly limited ...\\\" -> \\\"greatly limits ...\\\"\", \"Bottom of page 5: \\\"We follow the most challenge experimental setup ...\\\" -> \\\"We follow the most challenging experimental setup ...\\\"\", \"Mid page 6: \\\"we quickly reaches ...\\\" -> \\\"we quickly reach ...\\\"\", \"Mid page 6: \\\"with only two hidden node ...\\\" -> \\\"with only two hidden nodes ...\\\"\", \"Section 4.3: \\\"Benefited from our design ...\\\" -> \\\"Benefitting from our design ...\\\"\", \"Conclusion: capitalisation of the first sentence\", \"### Post rebuttal\", \"Thank you to the authors for their detailed response and their effort in improving the presentation of the paper. I was impressed with how much the paper improved in this second version. In particular, I very much appreciate that the introduction starts with a simple one sentence explanation of the problem of neural network verification. This can be further improved if it included (almost) no maths, which can be deferred to the Background section. The figures in the updated paper are very good and a huge improvement of presentation. Finally, the paper now includes two clearly stated theorems, which also make the presentation and contribution much clearer.\", \"I have increased the score I gave to the paper. Regardless of what the outcome for ICLR will be, I would like to encourage the authors to re-iterate on the presentation to really crystallize the problem, definitions and the suggested approach --- the paper is already so much better than the first version, and even just a little more work can make it even better.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
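For readers following the branch-and-bound discussion in these reviews, the following is a minimal sketch of how complete verification is typically built around an incomplete bounding procedure such as LiRPA. All names here (`lower_bound`, `split_domain`, `find_counterexample`, the domain interface) are illustrative placeholders rather than the reviewed paper's actual API, and completeness additionally requires that the splitting scheme exhausts the domain (e.g., by splitting every unstable ReLU).

```python
import heapq
import itertools

def branch_and_bound(domain, lower_bound, find_counterexample, split_domain, tol=1e-6):
    """Complete-verification sketch: decide whether min_x f(x) >= 0 over `domain`.
    `lower_bound` can be any sound but incomplete verifier (e.g., a LiRPA-style
    linear relaxation); a tighter bound only shrinks the search tree."""
    tie = itertools.count()  # tiebreaker so the heap never compares domain objects
    queue = [(lower_bound(domain), next(tie), domain)]
    while queue:
        bound, _, dom = heapq.heappop(queue)
        if bound >= 0:
            continue  # property holds on this sub-domain; prune it
        cex = find_counterexample(dom)  # cheap falsifier, e.g., an adversarial attack
        if cex is not None:
            return "violated", cex
        if dom.width() < tol:
            # With a finite split scheme (e.g., ReLU splits) this branch is never
            # reached; it is only a safeguard for continuous input splitting.
            return "unknown", dom
        for child in split_domain(dom):  # branch, then re-bound each part
            heapq.heappush(queue, (lower_bound(child), next(tie), child))
    return "verified", None
```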
"{\"title\": \"Verifying simple neural network properties on a GPU\", \"review\": \"The paper focuses on verifying simple properties of neural networks on\\naccelerator hardware. Instead of using linear programming, Lirpa is\\nconsidered as an alternative and minimal amounts of LP is added around\\nto allow to use the same class of properties. In the considered\\nexamples the approach becomes much faster than the LP approach.\\n\\nMy main problem with the paper, is that is claims a complete\\nverification procedure without proper proofs. Yes, the new algorithm\\nis presented and some general discussion of things that are done with\\nit and how they work is provided. However, to claim complete\\nverification a general soundness of the procedure should be proved. In\\nparticular I would like a theorem for the correctness of each of the\\nvarious components and a combined theorem for the whole procedure.\\n\\nOn the other hand, the experiments show that the approach is fast and\\nas such makes it more feasible for verification and the paper mostly\\nreads well.\\n\\nAn interesting alternative to discuss in related work, could be proof\\nassistants. See for example the work of Bentkamp, Blanchette JAR 2019.\\nUsing proof assistants based on more complex logics, one can verify\\nproperties much more efficiently. The authors only mention Katz's\\nwork on SMT, but if you consider higher order logic and the logics of\\ninteractive theorem provers, the \\\"NP-hard properties\\\" can be checked\\nwithout considering all the cases.\", \"minor\": \"Conclusion starts with lowercase \\\"we\\\".\\n\\nBased on the problems found by the other reviews and having read the rebuttals I have modified my score.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
bgQek2O63w | Self-supervised Adversarial Robustness for the Low-label, High-data Regime | [
"Sven Gowal",
"Po-Sen Huang",
"Aaron van den Oord",
"Timothy Mann",
"Pushmeet Kohli"
] | Recent work discovered that training models to be invariant to adversarial perturbations requires substantially larger datasets than those required for standard classification. Perhaps more surprisingly, these larger datasets can be "mostly" unlabeled. Pseudo-labeling, a technique pioneered simultaneously by four separate works in 2019, has been proposed as a competitive alternative to labeled data for training adversarially robust models. However, when the amount of labeled data decreases, the performance of pseudo-labeling catastrophically drops, thus questioning the theoretical insights put forward by Uesato et al. (2019), which suggest that the sample complexity for learning an adversarially robust model from unlabeled data should match the fully supervised case. We introduce Bootstrap Your Own Robust Latents (BYORL), a self-supervised learning technique based on BYOL for training adversarially robust models. Our method enables us to train robust representations without any labels (reconciling practice with theory). Most notably, this robust representation can be leveraged by a linear classifier to train adversarially robust models, even when the linear classifier is not trained adversarially. We evaluate BYORL and pseudo-labeling on CIFAR-10 and ImageNet and demonstrate that BYORL achieves significantly higher robustness (i.e., models resulting from BYORL are up to two times more accurate). Experiments on CIFAR-10 against $\ell_2$ and $\ell_\infty$ norm-bounded perturbations demonstrate that BYORL achieves near state-of-the-art robustness with as little as 500 labeled examples. We also note that against $\ell_2$ norm-bounded perturbations of size $\epsilon = 128/255$, BYORL surpasses the known state-of-the-art with an accuracy under attack of 77.61% (against 72.91% for the prior art). | [
"self-supervised",
"adversarial training",
"robustness"
] | Accept (Poster) | https://openreview.net/pdf?id=bgQek2O63w | https://openreview.net/forum?id=bgQek2O63w | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"xSAe703dTs",
"363UeQ7I4NF",
"zZlglicJRKR",
"B3x3LbbNi8",
"GC_YsD2-waM",
"6BTgI9DqXUM",
"6Atinlf5rW-",
"hdYLn61A5BB",
"rZ18KBiedI6",
"kdsHSLS7iDQ",
"7QOCOetSQbq",
"zg5OwmTB_wj",
"4axSJPSrrqk",
"cNELwqkOz0j",
"RCQOECCAsis",
"JsjYtTIslq",
"QJyNp6a1Jk",
"k8cwa0R9_j"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040432139,
1606240853137,
1606152261420,
1606110954557,
1605880490578,
1605879349019,
1605878994270,
1605550033174,
1605284593281,
1605203812571,
1605201832887,
1605201675234,
1605200698482,
1605200011299,
1604001888659,
1603947613479,
1603897185393,
1602971103418
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3713/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper considers the use of adversarial self-supervised learning to render robust data representations for various tasks, in particular to integrate the Bootstrap Your Own Robust Latents (BYOL) with adversarial training, where a small amount of labeled data is available together with a sizable unlabeled dataset. Especially the low-data regime is of interest. It extends a previous method with a new adversarial augmentation technique, it is compared against several methods, and the robust representations are shown to be useful more generally. There were some confusing presentations and questions that were resolved in a detailed discussion with the reviewers.\"}",
"{\"title\": \"Satisfactory and convincing results\", \"comment\": \"The authors have addressed al most all of my concerns quite convincingly. In fact the large scale Imagenet results are a great addition to the proof of concept.\\n\\nOf course, increasing my earlier score to clear accept.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Dear reviewer,\\n\\nAs the rebuttal period finishes tomorrow, we would appreciate any additional comments or requests for clarification.\", \"related_to_the_following_comment\": \"> If we look at the low-label regime alone, probably the paper has good contributions, that on ImageNet is not entirely convincing. If the authors convince me on that, happy to increase the score to clearly accept.\\n\\nWe highlight that the new results in the low-label regime (1% of labels) on ImageNet show a robust accuracy of 29.49% which surpasses by a large margin the clean accuracy obtained by a standard classifier for the same amount of labels. In the high-label regime (100% of labels), representations obtained by BYORL surpass classical adversarial training.\"}",
"{\"title\": \"Thanks for the careful response.\", \"comment\": \"After taking a close look at the authors' response, most of my concerns have well been addressed and I will increase my score to 6. Thanks for the great effort.\\n\\nMy remaining concern is about ImageNet. \\n\\n\\\"The latest manuscript revision now contains updated results for ImageNet.\\\"\\n\\nWhich table/figure did you refer to? \\n\\nI believe that robust ImageNet pre-training for various downstream tasks is a more important and challenging question for adversarial defense. However, I understood \\\"We were unable to obtain better representations when pre-training BYORL representations on ImageNet at epsilon = 8/255.\\\" Considering that SOTA was achieved under the CIFAR-10 pre-trained model, I believe that this submission has its own merits. This is my reason for increasing the score to 6.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We would like to thank all reviewers for the time and effort spent reviewing this paper. We believe that we have addressed all concerns so far. However, if you have time, please indicate if there are any other concerns of yours which we have not addressed and we would be pleased to clarify those points.\", \"here_is_a_summary_of_updates_made_to_the_manuscript\": [\"Improvements to the related work section which now includes more papers and highlights the differences with our approach more clearly (Sec 2).\", \"Clarifications on the algorithm and general improvements to the text (Sec 3.3)\", \"Clean accuracy is now reported in Fig. 7 and 8.\", \"More experiments on the the transfer of robustness without adversarial training (Table 6 and updated Table 5) .\", \"Updated ImageNet results which demonstrate improvements over classical adversarial training (Table 5).\"], \"additionally\": [\"We addressed any concerns regarding gradient obfuscation (see answer https://openreview.net/forum?id=bgQek2O63w¬eId=4axSJPSrrqk).\", \"We performed an ablation study on the choice of classification head (see https://openreview.net/forum?id=bgQek2O63w¬eId=rZ18KBiedI6)\"]}",
"{\"title\": \"Non-robust fine-tuning on ImageNet\", \"comment\": \"The latest manuscript revision now contains updated results for ImageNet. Similarly to the previous results, we also observe that robustness transfers well when using non-robust fine-tuning.\\n\\nWe sincerely appreciate your time and effort to review our paper. We believe we have addressed any remaining comment or question. If you have time, please indicate if there are any other concerns of yours which we have not addressed, we would be pleased to clarify those points.\"}",
"{\"title\": \"Non-robust finetuning results on ImageNet\", \"comment\": \"The final results are now available in latest revision of manuscript. In particular, our models surpass classical adversarial training when 100% of the labels are available.\"}",
"{\"title\": \"Updated ImageNet results\", \"comment\": \"We have finished the evaluation of our newest ImageNet models (with robust fine-tuning) against eps = 4/255.\\n* With 1% of available labels: Robust accuracy is 28.59%, Clean accuracy is 44.64%.\\n* With 10% of available labels: Robust accuracy is 38.96%, Clean accuracy is 61.51%.\\n* With 100% of available labels: Robust accuracy is 41.83%, Clean accuracy is 63.29%.\\n\\nOverall, results are very encouraging as they surpass the performance of classical adversarial training [2]. We are now in the process of evaluating standard fine-tuning and will update the paper with these newer numbers.\\n\\nThe previously reported number of 47.6% was computed against AutoPGD-100 with 1 restart, as opposed of the combination of AutoAttack and MultiTargeted which is used for the numbers above and throughout the paper.\\n\\n[2] Qin et al., \\\"Adversarial Robustness through Local Linearization\\\" , 2019\"}",
"{\"title\": \"Additional results\", \"comment\": \"Likewise, thank you for the quick response.\\n\\n> I will really appreciate the authors to point out which table I should look into. I was expecting to see a similar supplementary table like Table 2 showing the 'robustness transferability' of \\\\ell_\\\\infty-adv-robust pre-trained BYORL.\", \"we_apologize_for_the_misunderstanding\": \"we meant that different experiments used different threat models (not necessarily the same experiment). In the meantime, we re-did Table 2 for l-inf perturbations of size 8/255. The new l-inf table is in the revised manuscript (Table 6 in the appendix) and shows similar conclusions to Table 2. Note that a table dedicated to ImageNet is already there (Table 5). All results indicate that it is possible to transfer robustness across different adversarial scenarios.\\n\\n> Additional question based on Table 5 in Appendix, will ImageNet robust representation pretraining help? Compared to Table 2 using CIFAR-10 pre-trained network?\\n\\nThis is an excellent question. In a bid to improve results against l-inf perturbations of size 8/255, we had tried the suggested experiment without much success on CIFAR-10. We were unable to obtain better representations when pre-training BYORL representations on ImageNet at epsilon = 8/255. There are two possible explanations:\\n1. It is possible that using a large epsilon (such as 8/255) is too destructive for ImageNet-sized images which contain a lot of details and textures. Prior work demonstrated that at 16/255, adversarial attacks are able to completely modify images (see Fig. 1 in [8]).\\n2. ImageNet might not be as useful for adversarial robustness on CIFAR-10 as additional images from the 80M Tiny Images dataset. Pre-training on ImageNet [7] is currently the lowest performing model using additional data on https://robustbench.github.io/.\\n\\nHowever, it is hard to draw any conclusions and we should re-evaluate this experiment with a smaller epsilon (4/255) or against l2 perturbations. It is likely that we will not have time to thoroughly perform this experiment before the end of the rebuttal period (as training robust ImageNet representations take close to 1 week), but we will strive to have it done as soon as possible.\\n\\n> If the classification head is the root cause, then I would suggest having more results/empirical evidence to support it. Ablation studies on the choice of classification head or fine-tuning strategy (e.g., partial vs. full-network fine-tuning) would be nice.\\n\\nThe classification head is one issue (but not the only cause). That is, if we train a model $y = f \\\\circ g (x)$ such that $y$ is robust to adversarial attacks on $x$, there is no guarantee that $z = g(x)$ will itself be robust. Robustness will depend on the loss/training used (e.g., in [2], representations stemming from the \\\"rotation\\\" pre-training are more robust under partial adversarial fine-tuning) as well as the architectures that parametrize $f$ and $g$. 
That being said, we performed the following experiment where instead of fine-tuning a linear head, we fine-tuned MLPs with 2 and 3 hidden layers (non-robustly and using the best pre-trained l-2 robust model):\\n* Linear head: Robust accuracy of 78.22% (against l-inf perturbations and AutoPGD-100).\\n* 2-hidden layers MLP (256 hidden units per layer): Robust accuracy of 71.8%\\n* 3-hidden layers MLP (256 hidden units per layer): Robust accuracy of 70.9%\\n\\nThese results demonstrate that using a deeper head reduces robustness when standard fine-tuning is used. The results, however, are quite encouraging, as non-trivial level of robustness are retained. This would indicate that BYORL itself has a greater ability to enforce that $z$ is robust and useful for downstream tasks than the approach in [2].\\n\\nFull-network finetuning reduces robust accuracy to 0% if trained for sufficiently long (as shown in [2]). This is in part explained by the use of standard cross-entropy which will continue to push correct logits as high as possible (at the expense of smoothness and ultimately robustness).\\n\\n[2] Chen et al. \\\"Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.\\\", 2020.\\n[7] Hendrycks et al., \\\"Using Pre-Training Can Improve Model Robustness and Uncertainty\\\", 2019\\n[8] Zoran et al., \\\"Towards Robust Image Classification Using Sequential Attention Models\\\", 2019\"}",
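To make the fine-tuning protocol in this exchange concrete, here is a minimal sketch of standard (non-robust) partial fine-tuning of a frozen pre-trained encoder, assuming PyTorch; the encoder, feature dimension, and data loader are placeholders, and this is not the authors' released code.

```python
import torch
import torch.nn as nn

def fit_linear_probe(encoder, loader, feat_dim, num_classes,
                     epochs=10, lr=1e-3, device="cpu"):
    """Standard (non-robust) partial fine-tuning: the pre-trained encoder
    stays frozen and only a linear classifier is trained on its features."""
    encoder.to(device).eval()            # also freezes batch-norm statistics
    for p in encoder.parameters():
        p.requires_grad_(False)
    head = nn.Linear(feat_dim, num_classes).to(device)
    opt = torch.optim.Adam(head.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():        # gradients never touch the encoder
                z = encoder(x)
            loss = nn.functional.cross_entropy(head(z), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return head
```

Swapping `nn.Linear` for a 2- or 3-layer MLP reproduces the deeper-head ablation; per the numbers above, deeper trainable heads lose more robustness under this standard (non-adversarial) training.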
"{\"title\": \"Thanks for prompt response.\", \"comment\": \"The authors have addressed some of my concerns except the following highlighted ones:\\n\\n(a) \\\" We decided to split the different ablation experiments between l-2 and l-inf. There was no particular reason to focus l2 for this table (more experiments using l-inf are in the appendix). \\\"\\n\\nI will really appreciate the authors to point out which table I should look into. \\n\\nI was expecting to see a similar supplementary table like Table 2 showing the 'robustness transferability' of \\\\ell_\\\\infty-adv-robust pre-trained BYORL. \\n\\n(b) Additional question based on Table 5 in Appendix, will ImageNet robust representation pretraining help? Compared to Table 2 using CIFAR-10 pre-trained network?\\n\\n(c) \\\"As explained earlier in this answer, there is a rather large difference between our approach and the one in [2]. In [2], the authors fine-tune a deep non-linear model on top of their frozen representations, whereas we only fine-tune a linear model. We will make this distinction clear. In theory, we expect a drop of robust accuracy when using standard fine-tuning. In practice, that drop is relatively minor (at least on CIFAR-10 in the case of BYORL).\\\"\\n\\nIf the classification head is the root cause, then I would suggest having more results/empirical evidence to support it. Ablation studies on the choice of classification head or fine-tuning strategy (e.g., partial vs. full-network fine-tuning) would be nice. \\n\\nIn summary, if authors can convince me that 1) transfer robustness is achieved by the proposal under different adversarial scenarios (\\\\ell_infty, \\\\ell_2, ImageNet-pre-training) and 2) Linear-fine-tuning is sufficient to preserve robustness\\uff0c then it will enforce my review toward the positive side.\"}",
"{\"title\": \"Clarifications and new ImageNet results\", \"comment\": \"Thank you for the detailed review. We highlight that we obtained new results on ImageNet which now surpass the known state-of-the-art against l-inf perturbations of size 4/255 (see end of answer).\\n\\n> Overall, the paper is well written except for a few minor grammatical errors, easy to follow and understandable. The paper discusses the existing literature and positions the proposed approach with respect to the state-of-the-art. The proposed method is definitely a decent contribution towards the field. While robustness evaluations are good, the transfer to unseen and without adversarial training are much more encouraging.\\n\\nWe appreciate that the main results of the paper were clear. Indeed, the fact that robust representations transfer well (even when using non-robust fine-tuning) is very encouraging and could impact many downstream applications.\\n\\n> In abstract \\u201c... pioneered by four separate and simultaneous work in 2019,\\u201d should be \\u201c... pioneered by four separate and simultaneous works in 2019,\\u201d + \\u201cSince Madry et al. (2017), various modification to their ...\\u201d should be \\u201cSince Madry et al. (2017), various modifications to their ...\\u201d\\n\\nThank you. These are now corrected.\\n\\n> As the % of labeled training data increases, there is not a significant improvement in robust test accuracy for CIFAR-10, that increases for CIFAR-100. Why is that happening for CIFAR-10?\\n\\nA possible explanation could be that CIFAR-10 only contains 10 classes. Hence with 500 examples, the model would see 50 examples per class. For CIFAR-100, the model would only see 5 examples per class.\\n\\n> On CIFAR-100, for anything over 10% labeled training data the other methods are still the state-of-the-art, the proposed method does not perform well. It again probably is due to the above observation of robust accuracy not increasing with a steep slope like the other methods.\", \"the_true_reason_why_this_happens_may_be_two_fold\": \"(i) CIFAR images have very low resolutions and self-supervised techniques rely of data augmentation schemes (such as heavy cropping) which might make things worse, especially when the number of classes increases (i.e., dataset diversity); (ii) BYORL has a much harder time with l-inf perturbations (at least for the hyper-parameters we chose). We also note that self-supervised techniques (with linear fine-tuning) do not yet match standard training performance in all situations (e.g., best supervised techniques reach > 88% top-1 accuracy on ImageNet, whereas the best self-supervised techniques reach 80% under linear evaluation), but they have the advantage of building general representations that can be used for many downstream tasks.\\n\\n> What is robust accuracy, Zhang et al. 2019 or Uesato et al 2019 or others have it defined, nevertheless, please say in a sentence what it is.\\n\\nThis has been added to manuscript.\\n\\n> On the ImageNet, the hypothesis of BYORL to get better as the labeled data increases is hard to buy given what we saw on CIFAR-10 and CIFAR-100. For those two, BYORL starts better and the state-of-the-art methods either reach BORL or outperform it. 
In ImageNet case, BYORL is outperformed with 1% itself, unless supported with empirical evidence, it is hard to believe the above statement of improvement.\\n\\nThankfully, we had anticipated this question and did perform new experiments on ImageNet in a bid to improve our original results.\\n\\nFirst, we highlight that Table 5 does not contain any results from the baseline (UAT), it only shows how robustness transfers (with robust and non-robust fine-tuning). Second, [1] showed that with 1% of labels, a standard classifier reaches 22% top-1 accuracy (we reach 32%), so it is anticipated that BYORL is better than UAT when only 1% of the labels are available.\\n\\nMost importantly, we were able to train a larger model (ResNet50-4x with lower color augmentation strength of 0.2 and an EMA decay rate of 0.998) and obtained a robust accuracy of 47.6% when using 100% of labels (against AutoPGD with 100 steps). This surpasses the best known supervised training result which is 47.0% (against PGD-100 and trained using a new regularizer) [2]. Standard adversarial training only reaches 39.7% with 100% of the labels (and UAT is equivalent to standard adversarial training when 100% of the labels are available). Hence, we are confident that overall BYORL will be better than UAT when only 10% or 1% of the labels are available (since it is better in the 100% setting). Note that full evaluations are still ongoing and we expect to update this thread and the paper early next week.\\n\\n> Overall, the empirical results are satisfactory but not entirely convincing.\\n\\nWe hope that our answer (and the answers to other reviewers) provide some confidence in our results.\\n\\n[1] Chen et al., \\\"A simple framework for contrastive learning of visual representations\\\", 2020\\n[2] Qin et al., \\\"Adversarial Robustness through Local Linearization\\\" , 2019\"}",
"{\"title\": \"Clarifications and additional results\", \"comment\": \"Thank you for the detailed review. We appreciate that the main results of the paper were clear. The adversarial low-label regime has not been widely studied and we believe to be among the first to show that, in practice, robust representations can transfer well (even when using standard finetuning).\\n\\n> RoCL [1] somewhat compromises novelty\\n\\nWe have only made recently aware that [1] was accepted to NeurIPS. In fact, at submission time, we only had access to its ArXiv version. This being said:\\n* BYORL is more scalable that RoCL as it is not based on contrastive learning which requires large batch sizes.\\n* The results in [1] are conducted against a PGD-20 attack and, as such, results are not directly comparable. From a high-level view, partial finetuning and transfer results (using adversarial fine-tuning) seem on par (at least when comparing to the CIFAR-10 -> CIFAR-100 transfer against l-inf perturbations).\\n* [1] neither studies the transferability of robust representations (when using standard finetuning), nor the low-label regime.\\n\\nWe believe that both papers are complementary and have highlighted differences in the new revision of the manuscript.\\n\\n> Label budgets do not include validation set used for early stopping\\n\\nWe used this additional validation set for fairness. In fact, the BYORL models do not need to be early stopped (i.e., best checkpoint is often the last checkpoint), but we wanted to give a fair chance to the baseline adversarial training method (which strongly benefits from early stopping) [2]. While 1024 may seem excessive, it is a only a constant offset (e.g., 1% of labels is really 3%, 50% of labels is really 52%), and both methods benefit from this additional data in the same way. We have added more information in the new paper revision.\\n\\n> Clean accuracy for Fig 3/4\\n\\nApologies for not providing these numbers in the original submission (we have now added them to the appendix). Clean accuracy seems consistent (in terms of trend) with robust accuracy. In the low-label regime, BYORL does much better, while in the high-label regime UAT-FT does better.\\n\\n> informative to include the robust accuracy of a model trained directly on STL-10 and CIFAR-100 in Table 2\\n\\nWe did not train models on STL-10 or CIFAR-100 directly. We note that this would be good to have and plan to add this in the near future (each evaluation against AutoAttack+MultiTargeted takes a few days to complete). We do, however, expect models trained directly on STL-10 and CIFAR-100 to be significantly better. As a point of comparison, models trained on CIFAR-100 using adversarial training reach 43.2% robust accuracy [2]. The main message from Table 2 is to demonstrate non-trivial robustness transfer (even when using standard fine-tuning instead of robust fine-tuning).\\n\\n> multiple runs would give an idea of the variance.\\n\\nWe are conscious of this limitation. As each evaluation run takes a significant amount of time, we limited ourselves to a single training run. That being said, Fig. 3 and 4 can help estimate this variance. We observe that runs after 5% of labeled data obtain roughly the same robust accuracy (within 0.5% percentage points).\\n\\n> The paragraph after Eq 8 was confusing to me.\\n\\nWe symmetrize the loss by swapping $v$ and $v'$ and summing both losses. The adversarial attack is always done through the online network. 
The updated manuscript now contains the complete symmetric loss (denoted $\\\\mathcal{L}_\\\\theta^\\\\textrm{symmetric}$).\\n\\n> Some analysis into the representations learnt would be helpful\\n\\nWe refrained from doing any additional analysis as there are no universally accepted method to analyze such representations. We could produce a t-SNE plot, but did not believe it would give any additional insight (beyond the ones we already have).\\n\\n> Why does the method work better under l2 attacks as compared to l-inf attacks?\\n\\nl2 attacks of size 128/255 are easier for models to handle (as measured through the final clean and robust accuracy). This is true of BYORL as well as classical adversarial training. Exact analysis as to why this is the case and why that impacts BYORL differently from regular adversarial training remains to be done.\\n\\n> What is the effect of different transformations on robustness?\\n\\nAppendix B provides some insights. We evaluate the impact of color augmentation on BYORL and observe that the optimal color augmentation strength is slightly lower for BYORL than it is for standard BYOL. This can be explained by the fact that adversarial examples are a form of augmentation already and it is known that overly strong augmentations scheme hurt the performance of self-supervised techniques [3]. It is entirely possible that other transformations may impact BYORL differently.\\n\\n[1] Kim et al., \\\"Adversarial Self-Supervised Contrastive Learning\\\", 2020\\n[2] Rice et al., \\\"Overfitting in adversarially robust deep learning\\\", 2020\\n[3] Chen et al., \\\"A simple framework for contrastive learning of visual representations\\\", 2020\"}",
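As a reference for the symmetrization discussed in this exchange, a BYOL-style objective can be written as below. This follows the original BYOL formulation (Grill et al., 2020) and is only a sketch of the construction described here; the paper's exact $\mathcal{L}_\theta^\textrm{symmetric}$ may differ, e.g., in how the adversarial perturbation enters the online view.

```latex
% Online network (parameters theta) embeds view v; target network (EMA
% parameters xi) embeds view v'. Overlines denote l2-normalization.
\mathcal{L}_\theta(v, v')
  = \left\lVert \overline{q_\theta(z_\theta(v))} - \overline{z'_\xi(v')} \right\rVert_2^2
  = 2 - 2\,\frac{\left\langle q_\theta(z_\theta(v)),\, z'_\xi(v') \right\rangle}
                {\left\lVert q_\theta(z_\theta(v)) \right\rVert_2 \left\lVert z'_\xi(v') \right\rVert_2},
\qquad
\mathcal{L}_\theta^{\mathrm{symmetric}} = \mathcal{L}_\theta(v, v') + \mathcal{L}_\theta(v', v).
```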
"{\"title\": \"Transfer of robustness and clarifications\", \"comment\": \"Thank you for the review. We address the possibility that our models are gradient obfuscated towards the end of this answer. Hopefully, this convinces the reviewer of the quality of our experimental results.\\n\\n> Robust self-supervised pre-training + fine-tuning has been studied in two recent works.\\n\\nThank you for bringing [1] and [2] to our attention. We have added them to the related work section.\\n\\n[1] focuses on combining supervised adversarial training with a self-supervised task without building general representations (both tasks are trained in parallel). In our approach, we first build general representations that are robust and then fine-tune a linear model on top of these representations. We believe our results are complementary as they also highlight robustness transfer to multiple downstream tasks.\\n\\n[2] studies adversarial pre-training on self-supervised tasks (selfie, jigsaw, rotation). Models are for the most part trained on CIFAR-10 and evaluated on CIFAR-10 (or close variants). Fine-tuning is either partial or full. In the partial case, all ResNet blocks following the 3rd block are re-trained. While we also focus on the partial setting, we only train a linear model. To the contrary of [2], we also evaluate how our the resulting representations transfer to new tasks (STL-10 and CIFAR-100); and we also note that their approach does not preserve robustness with standard fine-tuning (as opposed to our method).\\n\\n> In Figure 2, why is additional prediction head $q(\\\\cdot,\\\\theta)$ in BYOL and BYORL needed?\\n\\nWe would like to refer the reviewer to the original BYOL publication [3]. This head is needed by the online network to match the target network despite having to use different weights for the first two stage (as the target network is a slow moving averaging of the online network to avoid collapse).\\n\\n > It is not clear why a very large eps used in l-2 robust training\\n\\n128/255 and 0.5 are standard radii for l2 robustness [4, 5].\\n\\n> In Table 2, it is not clear why l-inf robust pre-training results are missing?\\n\\nWe decided to split the different ablation experiments between l-2 and l-inf. There was no particular reason to focus l2 for this table (more experiments using l-inf are in the appendix).\\n\\n> I supposed that during fine-tuning, the robust representation is frozen, and only the linear classifier is adversarially trained, correct?\\n\\nYes, this is correct.\\n\\n> the results on Robust (BYORL) + Non-robust on CIFAR-10 seem very strong.\\n\\nThank you for highlighting one of the main results of this paper. We believe to be the first to demonstrate that, in practice, it is possible to transfer representational robustness to downstream tasks (without the need to perform robust fine-tuning). As indicated, this result is very strong and worth sharing.\\n\\n> However, [2] founds a different conclusion.\\n\\nAs explained earlier in this answer, there is a rather large difference between our approach and the one in [2]. In [2], the authors fine-tune a deep non-linear model on top of their frozen representations, whereas we only fine-tune a linear model. We will make this distinction clear. In theory, we expect a drop of robust accuracy when using standard fine-tuning. 
In practice, that drop is relatively minor (at least on CIFAR-10 in the case of BYORL).\\n\\n> I also suggest the authors check if the proposed defense yields obfuscated gradients\\n\\nThis was also our main worry. This is one of the reasons why all figures and tables show the robust accuracy resulting from one of the strongest combination of attacks (AutoAttack and MultiTargeted). We note that AutoAttack also uses a black-box attack (Square). We also ran the suggested evaluation (see below), which did not reveal further signs of gradient obfuscation. The loss landscapes do not exhibit any anomalies either (see Appendix).\\n\\nWe evaluated the model corresponding to the 2nd row in Table 2 against l2 perturbations of size epsilon={1, 2, 3}. We also evaluated the same model against epsilon=128/255 with an increased number of PGD steps. For both evaluations, we combined AutoPGD with cross-entropy and difference of logits ratio losses [6].\\n* eps = 1, number of steps = 100: Robust accuracy = 49.45%.\\n* eps = 2, num steps = 100: Robust accuracy = 7.19%.\\n* eps = 3, num steps = 100: Robust accuracy = 0%.\\n* eps = 128/255, number of steps = 400: Robust accuracy = 77.27% (attack seem to have converged in 100 steps).\\n\\n> if the pre-training task is conducted over CIFAR-10, can the results be further improved?\\n\\nThe pre-training task is already conducted over CIFAR-10 in Table 2. We do observe that robustness transfers well to unseen datasets (STL, CIFAR-100).\\n\\n[3] Grill et al., \\\"Bootstrap your own latent: A new approach to self-supervised Learning\\\", 2020\\n[4] Rice et al., \\\"Overfitting in adversarially robust deep learning\\\", 2020\\n[5] https://robustbench.github.io/\\n[6] Croce and Hein. \\\"Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks\\\", 2020\"}",
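The sanity check described above (robust accuracy swept over increasing radii, which should decay smoothly towards 0% when gradients are not obfuscated) can be reproduced with a plain PGD loop. The sketch below is an illustrative l2-PGD in PyTorch, assuming NCHW image batches; it is not the AutoAttack/MultiTargeted pipeline that produced the reported numbers.

```python
import torch
import torch.nn.functional as F

def pgd_l2(model, x, y, eps, steps=100, alpha=None):
    """Untargeted l2-PGD: maximize cross-entropy within an l2 ball of radius eps."""
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        (grad,) = torch.autograd.grad(loss, delta)
        # Step along the normalized gradient, then project back onto the ball.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta.detach() + alpha * grad / g_norm
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).detach().requires_grad_(True)
    return (x + delta).detach()

@torch.no_grad()
def robust_accuracy(model, x_adv, y):
    return (model(x_adv).argmax(dim=1) == y).float().mean().item()

# Sanity check: accuracy should fall smoothly (and reach ~0%) as eps grows.
# for eps in (128 / 255, 1.0, 2.0, 3.0):
#     print(eps, robust_accuracy(model, pgd_l2(model, x, y, eps), y))
```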
"{\"title\": \"Clarifications\", \"comment\": \"Thank you for highlighting that the paper is easy to follow and clear.\\n\\nAs correctly pointed out, we use adversarial training within BYOL. This requires making a few non-trivial choices (explained in Sec. 3.3). Additionally, we also perform a large number of experiments across different threat models and datasets and obtain conclusions that are worth sharing with the wider community. In particular:\\n* We demonstrate that it is possible to obtain high robustness in the very low-label regime, see Fig. 3 and 4.\\n* We also demonstrate that, contrary to popular belief [2], it is possible to preserve the robustness of self-supervised robust representations with standard partial fine-tuning (rather than robust fine-tuning), see Table 2.\\n\\n> The authors also claim that their approach is better than pseudo-labeling training for downstream tasks. Yet, this is not always the case.\\n\\nWe are not sure what the reviewer is referring to when stating that BYORL is not always better than pseudo-labeling for downstream tasks. Table 1 shows that either BYORL matches pseudo-labeling (within less 0.5 percentage points except for a single case) or vastly outperforms it.\\n\\n> The authors also mention that their approach reaches optimal performance with 5% of labels (and seems to deteriorate with more labelled data).\\n\\nWe meant that for all settings, BYORL does just as well with 5% of labels than with 100% of labels. The little fluctuations thereafter are due to using different initializations as models are re-trained from scratch for each setting (fluctuations are smaller than 0.5% which is not uncommon for adversarial training).\\n\\n> There is also a lack of comparison against contemporary SOTA self-supervised approaches.\\n\\nWe compare BYORL to one of the best semi-supervised technique to train adversarially robust models (i.e., UAT) [1,4]. Except for [2], we are not aware of many self-supervised techniques that have been applied to adversarial training (at the time of submission). We did experiment separately with SimCLR but found that the resulting network were prone to gradient obfuscation. We would appreciate if the reviewer could provide some references.\\n\\nIn [2], the authors do not focus on representation learning and, as such, they require deeper non-linear models when fine-tuning. To the contrary of our approach, their approach does not preserve robustness with standard fine-tuning. Finally, we also note that the authors of [2] used PGD-20 for their evaluation and, as such, results are not directly comparable. In fact, their best partially-finetuned model (which fine-tunes a much deeper model, but is the setting most similar to ours) reaches 45.10% robust accuracy against PGD-20 whereas we obtain 46.06% against AutoPGD-100 [3]. We have expanded our related work section in the new manuscript.\\n\\n[1] https://robustbench.github.io/\\n[2] Chen, Tianlong, et al. \\\"Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\\n[3] Croce and Hein. \\\"Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks\\\", 2020\\n[4] Uesato et al. \\\"Are labels required for improving adversarial robustness?\\\", 2019\"}",
"{\"title\": \"Application based approach, claims not very well supported.\", \"review\": \"This paper proposes some modifications to BYOL (Bootstrap Your Own Latents) in an attempt to address adversarial robustness in low-label high data regimes. The paper is well written and very easy to follow. Overall, the idea is clear and well presented. \\n\\nAlthough the author's claim their approach to be novel, as is, the main contribution of the paper is adding adversarial training to BYOL, which already does not require labels. \\n\\nThe authors also claim that their approach is better than pseudo-labeling training for downstream tasks. Yet, this is not always the case. The authors also mention that their approach reaches optimal performance with 5% of labels (and seems to deteriorate with more labelled data). There is no analysis on why this is the case. \\n\\nThere is also a lack of comparison against contemporary SOTA self-supervised approaches.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review report\", \"review\": \"In this paper, adversarial self-supervised learning is proposed to render robust data representations for down-stream fine-tuning tasks. The core idea is to integrate BYOL with adversarial training. The paper is well written in general. However, I do have several concerns about this submission.\\n\\n1. Robust self-supervised pre-training + fine-tuning has been studied in two recent works at least.\\n\\n[1] Hendrycks, Dan, et al. \\\"Using self-supervised learning can improve model robustness and uncertainty.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n[2] Chen, Tianlong, et al. \\\"Adversarial Robustness: From Self-Supervised Pre-Training to Fine-Tuning.\\\" Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2020.\\n\\nThe comparison with [1-2] is recommended. \\n\\n2. Unclear algorithm implementation. \\n\\na) In Figure 2, why is additional prediction head $q(\\\\cdot, \\\\theta)$ in BYOL and BYORL needed? Please clarify it. \\n\\nb) It is not clear why a very large eps used in $\\\\ell_2$ robust training, e.g., eps = 128/255. \\n\\nc) In Table 2, it is not clear why $\\\\ell_\\\\infty$ robust pre-training results are missing?\\n\\n3. Not convinced transfer results on unseen tasks. \\n\\nIn the paper, the authors claimed that \\\"For both representations, we train a robust linear model using adversarial training (see subsection 3.1) with a different label availability on STL-10 and CIFAR-100 against `$\\\\ell_\\\\infty$ and `2 norm-bounded perturbations.\\\"\\n\\nThus, I supposed that during fine-tuning, the robust representation is frozen, and only the linear classifier is adversarially trained, correct?\\n\\nIf so, in Table 2, the results on Robust (BYORL) + Non-robust on CIFAR-10 seem very strong. It indicated that the standard partial fine-tuning is able to preserve robustness from self-supervised robust representation. However, [2] founds a different conclusion. Thus, it is important to provide additional explanations and comparisons for the achieved results. \\n\\nI also suggest the authors check if the proposed defense yields obfuscated gradients, e.g., having a plot of robustness versus different attack strength eps during evaluation. \\n\\nLastly, if the pre-training task is conducted over CIFAR-10, can the results be further improved?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New method, good improvments in low-label regime\", \"review\": \"This paper introduces a new algorithm for learning adversarially robust models in the semi-supervised setting, where a small amount of labeled data is available together with a sizeable unlabeled dataset. The proposed approach BYORL adapts an existing self-supervised learning method BYOL by introducing a new adversarial augmentation technique based on maximizing the cosine similarity between representations. BYORL is evaluated on CIFAR-10 and compared against a recent pseudo-labelling based approach UAT-FT for the semi-supervised setting, and is shown to outperform UAT-FT in terms of robust accuracy under $\\\\ell_2$ and $\\\\ell_\\\\infty$ attacks under the low-labelled data regime. The representations learnt by BYORL are also shown to be better than that of UAT-FT when transferred to other datasets. Finally, robust representations are shown to be more important than learning a robust linear classifier on top.\", \"strengths\": [\"Significant improvements in robust accuracy in the low-label regime\", \"Interesting observations regarding transferability of robust representations and importance of final robust linear classifier\", \"Paper is generally clear and well-written\"], \"weaknesses\": [\"RoCL as introduced by \\\"Adversarial Self-Supervised Contrastive Learning\\\", NIPS 2020 can also be applied to the semi-supervised setting, which somewhat compromises novelty. Ideally this method should be included in the experimental evaluations. This paper was cited but not discussed in the context of related work, but should be.\", \"Label budgets claimed do not include sizeable validation set used for early stopping, which can sometimes exceed the label budget (e.g. result with 500 labeled images also uses validation set of 1024 examples).\", \"Experimental evaluation could be strengthened in a few ways:\", \"Clean accuracy comparison for Fig 3/4 - does UAT-FT do better in terms of clean accuracy?\", \"It would be informative to include the robust accuracy of a model trained directly on STL-10 and CIFAR-100 in Table 2 to show how good the transferred representations are.\", \"Experimental results seem to be from a single run; multiple runs would give an idea of the variance.\", \"Overall, the robust accuracy improvements achieved in the low-label regime by BYORL are significant and the method is somewhat novel. The paper could be further strengthened by improving the experimental evaluation as described above. The highly related work RoCL should be discussed and ideally compared with.\"], \"other_comments\": [\"The paragraph after Eq 8 was confusing to me. What is meant by symmetrizing the loss? What is the final loss used to train the model after symmetrization? And why does the argument about batch-norm statistics not apply to the online network as well if the loss is symmetrized (which I took to mean that both models would be separately attacked and the losses added up)?\", \"Some analysis into the representations learnt would be helpful to give some insight into why the method works.\", \"Why does the method work better under $\\\\ell_2$ attacks as compared to $\\\\ell_\\\\infty$ attacks?\", \"What is the effect of different transformations on robustness? Are the transformations important for robustness different from those important for clean classification?\", \"*** Post Response Comments ***\", \"I thank the authors for addressing the points raised. 
I am raising my score accordingly to 7.\"], \"nit\": \"The y-axis labels on Figure 7 and 8 should probably say \\\"Clean accuracy\\\" instead of \\\"Robust test accuracy\\\".\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review: SELF-SUPERVISED ADVERSARIAL ROBUSTNESS FOR THE LOW-LABEL, HIGH-DATA REGIME\", \"review\": \"##########################################################################\", \"summary\": \"The paper proposes a new self-supervised technique, Bootstrap Your Own Robust Latents (BYORL), based on an existing technique, BYOL. BYORL proposes to provide adversarially robust representations for low-label regimes. The paper claims that BYORL achieves state-of-the-art performance on CIFAR-10 even with data that is labeled as low as 1%. In fact, the authors highlight that the representations resulted from BYORL avoid the explicit training for adversarial robustness, because they are already robust.\\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, the paper in its current form is above the acceptance threshold. The proposed idea looks very encouraging. Provided the authors address some of the concerns the proposed method has a significant potential to the self-supervised adversarial robustness. If we look at the low-label regime alone, probably the paper has good contributions, that on ImageNet is not entirely convincing. If the authors convince me on that, happy to increase the score to clearly accept. \\n##########################################################################\", \"pros\": \"1. Overall, the paper is well written except for a few minor grammatical errors, easy to follow and understandable. \\n\\n2. The paper discusses the existing literature and positions the proposed approach with respect to the state-of-the-art. The proposed method is definitely a decent contribution towards the field.\\n\\n3. While robustness evaluations are good, the transfer to unseen and without adversarial training are much more encouraging.\", \"cons\": \"1. In abstract \\u201c... pioneered by four separate and simultaneous work in 2019,\\u201d should be \\u201c... pioneered by four separate and simultaneous works in 2019,\\u201d\\n\\n2. \\u201cSince Madry et al. (2017), various modification to their ...\\u201d should be \\u201cSince Madry et al. (2017), various modifications to their ...\\u201d\\n\\n3. As the % of labeled training data increases, there is not a significant improvement in robust test accuracy for CIFAR-10, that increases for CIFAR-100. Why is that happening for CIFAR-10? \\n\\n4. On CIFAR-100, for anything over 10% labeled training data the other methods are still the state-of-the-art, the proposed method does not perform well. It again probably is due to the above observation of robust accuracy not increasing with a steep slope like the other methods. \\n\\n5. What is robust accuracy, Zhang et al. 2019 or Uesato et al 2019 or others have it defined, nevertheless, please say in a sentence what it is.\\n\\n6. On the ImageNet, the hypothesis of BYORL to get better as the labeled data increases is hard to buy given what we saw on CIFAR-10 and CIFAR-100. For those two, BYORL starts better and the state-of-the-art methods either reach BORL or outperform it. In ImageNet case, BYORL is outperformed with 1% itself, unless supported with empirical evidence, it is hard to believe the above statement of improvement.\\n\\n7. Overall, the empirical results are satisfactory but not entirely convincing.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Jr8XGtK04Pw | Hippocampal representations emerge when training recurrent neural networks on a memory dependent maze navigation task | [
"Justin Jude",
"Matthias Hennig"
] | Can neural networks learn goal-directed behaviour using similar strategies to the brain, by combining the relationships between the current state of the organism and the consequences of future actions? Recent work has shown that recurrent neural networks trained on goal based tasks can develop representations resembling those found in the brain, entorhinal cortex grid cells, for instance. Here we explore the evolution of the dynamics of their internal representations and compare this with experimental data. We observe that once a recurrent network is trained to learn the structure of its environment solely based on sensory prediction, an attractor based landscape forms in the network's representation, which parallels hippocampal place cells in structure and function. Next, we extend the predictive objective to include Q-learning for a reward task, where rewarding actions are dependent on delayed cue modulation. Mirroring experimental findings in hippocampus recordings in rodents performing the same task, this training paradigm causes nonlocal neural activity to sweep forward in space at decision points, anticipating the future path to a rewarded location. Moreover, prevalent choice and cue-selective neurons form in this network, again recapitulating experimental findings. Together, these results indicate that combining predictive, unsupervised learning of the structure of an environment with reinforcement learning can help understand the formation of hippocampus-like representations containing both spatial and task-relevant information. | [
"recurrent neural network",
"place cell",
"hippocampus",
"neural dynamics"
] | Reject | https://openreview.net/pdf?id=Jr8XGtK04Pw | https://openreview.net/forum?id=Jr8XGtK04Pw | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"VBG346qyt7t",
"pZz851JhQ3z",
"m3PM2PItHmM",
"G3rSGNX-K8O",
"BAYI2FNNM_",
"uAqsgTFsP6u",
"B41a73EcDHW",
"p9hQ8XzEi0b",
"FlPCv756lWC",
"uxPIryRVS2t",
"DxXVAWWHKeK",
"_Sf3NTtx0zd",
"rNajhMVKXIu",
"pOReP3ueqT",
"lF-NjoDyDrY"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040437731,
1606138712413,
1606138589772,
1606138514524,
1606137313935,
1606137280033,
1606136981370,
1606136932087,
1606136556623,
1606136426250,
1606135347266,
1603888299523,
1603874943647,
1603822317193,
1603560959957
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3712/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3712/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3712/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3712/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper analyses a recurrent neural network model trained to perform a simple maze task, and reports that the network exhibits multiple hallmarks of neural selectivity reported in neurophysiological recordings from the hippocampus\\u2014 in particular, they find place cells which also are tuned to task-relevant locations, cells which anticipate possible future paths, and a high proportion of neurons tuned to task variables.\\n\\nThe reviewers appreciated the interesting empirical analysis, and the demonstration that multiple such features could arise in the same neural network\\u2014 to the best of my knowledge, this had not been demonstrated explicitly before. However, there were also multiple concerns, which lead to this paper beeing discussed extensively and controversially. In particular, it is not clear which features arise from which learning objective, for example, for place cells to arise, do we just need sensory prediction, or do we need q-learning? In addition, there were some points in which the tightness of the analogy between model and biology is questionable\\u2014 in particular, this refers to the comprising between hippocampal recordings and the evaluation of the network. Finally, it is also clear that some of these observations reported in the paper are, indeed, empirical observations rather than explanations. Because of these shortcomings, there was no consensus and strong support from the reviewers for acceptance of the paper.\\n\\nAfter extensive discussion between both the reviewers, the AC and the program chair, the final decision was to not accept the paper. We do hope that the reviews will help you in improving the study and its presentation. It clearly has potential to be a valuable contribution to the literature.\"}",
"{\"title\": \"Response to reviewer 4 (part 3)\", \"comment\": \"3)\\n\\n* \\u201cFinally, a more general [...] overtrained rat.\\u201d\\n\\nWe find that these RNN dynamics are observed under a wide range of initial conditions, training hyperparameters and network sizes, and are recapitulated in a similar fashion in every instance that the network solves the reward task perfectly (without backtracking) and importantly, when the network is trained with the combined loss (Eq. 2) - see explanation on page 7 in the updated manuscript. Thus we believe these results are particularly enlightening with regard to hippocampal dynamics. We have updated the paper showing the variety of network sizes and initial conditions where the reward task is solved (Figure 3 and page 4). \\n\\nWe find the reviewer\\u2019s alternative hypothesis suggestion intriguing, it is indeed possible that sweeping is simply a consequence of recurrent dynamics that resulted from learning the task. We are not aware of published experimental data which could help dissociate a purely dynamical phenomenon from higher level functions such as planning (or if these two are indeed similar in nature). This is an interesting direction where this model could help formulate testable predictions.\", \"minor_concerns\": \"* \\u201cAs such, a network of Gated Recurrent [...] between choice and reward?\\u201d\\n\\nThere are 5 steps between the cue and choice points, with 7 steps between the choice point and the first reward location on either return maze arm. We have added these details in the updated manuscript on page 3. The LSTM has a cell state which can be trained to be consistent between timesteps and we believe this is crucial for maintaining the cue modulation in network memory for many timesteps.\\n\\n* Related to the previous point: \\u201cWe attempted to run [...] observations (e.g. 2 sequence problem in Hochreiter and Schmidhuber, 1997).\\n\\nWe thought this might be the case too (which is why we tested it) but the LSTM is unable to use step information in this way to solve the task. This is due to random actions being taken at all three choice points (primary and two secondary) during epsilon-greedy reward training - this causes the timestep number between choice points and prior taken actions to be inconsistent in various maze traversals during reward training.\", \"details\": \"* It would be good to clarify what exactly is the action set of the agent.\\n\\nThe LSTM predicts Q-values relating to four agent actions, movement either up, down, left or right (cardinal directions). It predicts all four Q-values regardless of whether they can be used at a given position or not - the allowed action with the highest Q-value is chosen at each step when the action is not chosen randomly. We already mention that the network predicts state-action values associated with the four cardinal directions in relation to the agent\\u2019s current position and direction. We have updated the manuscript mentioning the network chooses optimal actions of those available.\\n\\n* I would like to know how exactly are activity maps obtained. The authors mention \\u201cPlace fields determined by contiguous locality with average activity exceeding 30% peak unit activity during a single left trajectory followed by a right trajectory\\u201d but I don\\u2019t find this particularly clear. 
I also found it difficult to understand the bottom row of Fig 3.\\n\\nActivity maps are obtained by running the agent and collecting activity from the start point of the maze through a left sided trajectory back to the start point, followed by a right sided trajectory (with cues presented) back to the start point. This is what we show in the top row of figure 3 (Figure 4 in updated manuscript). We then denote place fields as areas of firing where the aggregated activity exceeds 30% of the peak activity level for each particular LSTM unit - the dotted regions in the top and bottom rows of figure 3 (Figure 4 in updated manuscript).\\n\\nFor the activity maps in the bottom row of figure 3 (Figure 4 in updated manuscript), we run the agent from the start point of the maze to the top of the central maze stem (presenting a left side cue at the cue point) and pause the agent here feeding the LSTM the same observation input for many timesteps. Here we show activity exceeding 60% of the peak activity level for each LSTM unit in order to emphasise that this extrafield firing phenomenon occurs at a significant level. This is shown in addition to the previously identified place fields (as in the top row).\\n\\nWe have clarified this in depth on page 5 of the updated manuscript.\\n\\n* Fig 5 should be referred to in the paragraph starting with \\u201cIn stark contrast to the dynamics of the LSTM network following predictive pre-training...\\u201d\\n\\nWe thank the reviewer for this suggestion and we have added this reference on page 7 of the updated manuscript.\\n\\n\\n* Why is the return to start representation in Fig. 6 different for right and left trajectories?\\n\\nWe believe this is due to left and right trajectories having somewhat dissociated within the LSTM representation after reward training.\"}",
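The action-selection rule described in the reply above (all four Q-values predicted, only legal moves considered, random actions with probability epsilon) could be implemented as in the following sketch. This is an illustrative PyTorch reconstruction, not the authors' code; the names `q_values` and `allowed_mask` are assumptions.

```python
import torch

def select_action(q_values, allowed_mask, epsilon=0.0):
    # q_values: shape (4,), predicted Q-values for up/down/left/right.
    # allowed_mask: bool tensor of shape (4,), True where a move is legal here.
    legal = torch.nonzero(allowed_mask, as_tuple=False).squeeze(-1)
    if torch.rand(()).item() < epsilon:
        # Epsilon-greedy exploration: a uniformly random legal action.
        return legal[torch.randint(len(legal), (1,))].item()
    # Otherwise take the legal action with the highest predicted Q-value.
    masked_q = q_values.masked_fill(~allowed_mask, float("-inf"))
    return int(masked_q.argmax())
```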
"{\"title\": \"Response to reviewer 4 (part 2)\", \"comment\": \"2)\\n\\n* \\u201cThe pre-train stage is done on trajectories that correspond [...] correct turn (which corresponds to one of the two actions).\\u201d\\n\\nThe network here must readapt connections in order to predict Q-values rather than RGB values, therefore it must learn by RL. In addition, the actions taken during Q-learning are initially completely random (both at the primary and secondary choice points) and so it must learn by RL entirely to gain perfect performance on the task, as the actions become more on policy over subsequent iterations (epsilon-greedy).\\n\\nImportantly, we show in Figure 3 and on page 4 of our updated manuscript, that Q-learning alone with pre-training learns an altogether different strategy for reaching rewards, contrasting with the case using the combined loss (Eq. 2, this was Eq. 3 in the original paper). Q-learning alone seems to cause the network to completely dissociate from its pre-trained spatial map representation whereas training with the combined loss seems to cause the network to utilise this spatial map effectively to solve the reward task perfectly and in relatively few iterations (Figure 3).\\n\\n* \\u201cSecond, the LSTM is exposed only [...] of the environment (a la Tollman).\\u201d\\n\\nIt is not possible to pre-train the agent in the way the reviewer suggests, in a free random exploratory manner. This is due to the structure of the maze and our training paradigm. The complete uncertainty of turn direction when the agent enters or leaves the central maze stem when performing this exploratory random walk method of pre-training would render the network unable to converge with sufficient predictive performance and unable to form a latent spatial map of the maze environment. Our training paradigm consists of the agent only predicting the subsequent visual stimulus based on the previous visual stimulus and does not receive any spatial or velocity information and is not informed of the last action taken either. Therefore pre-training with random movements would be impossible to learn.\\n\\nImportantly, if we pre-train and then train the agent on the reward task using Q-learning alone (and no wall prediction), the representation does not move forward when the agent is paused at the top of the maze. We have updated the manuscript stating this explicitly on page 7. Thus the agent sweep cannot be said to be following these set trajectories. Notably, if we do not pre-train the agent, and train the agent only on the reward task with the joint Q-learning and wall colour prediction loss (Eq. 2), the network representation still recapitulated the same forward sweeping behaviour shown in figure 5 (Figure 6 in the updated manuscript) - thus this behaviour (after reward training) is dissociate from pre-training.\\n\\n* \\u201cOn the other hand, [...] phase (Johnson & Redish, 2007).\\u201d\\n\\nWe believe this sort of pre-training (including the Q-learning component) with rewards already being presented would make the task far too easy as the network will have a heavy bias when the reward based training is commenced, therefore we do not see this as conducive to important insights.\"}",
"{\"title\": \"Response to reviewer 4 (part 1)\", \"comment\": \"We thank the reviewer for their careful reading of our paper and for their scrupulous evaluation of our results. We have made substantial updates and alterations to the manuscript to address the reviewer\\u2019s comments and we respond to feedback below.\", \"cons\": \"1.\\n\\n* \\u201cOne of the main results of the paper [...], repeating the same constant observation.\\u201d\\n\\nThis is what the rodent is doing experimentally in (Johnson & Redish, 2007), and we replicate this experiment directly, so we feel prudent to follow this as a reasonable simulation for subsequent comparison of hippocampal characteristics. In page 4 of the original manuscript we state that, in experiments, rodents seem to pause at high consequence decision points (Johnson & Redish, 2007) with alternating head movement behaviour signifying vicarious trial and error (VTE) (Muenzinger, 1938; Hu & Amsel, 1995).\\n\\n* \\u201cThe LSTM was trained on trajectories on the maze, [...], which explains why the correct sequence is followed.\\u201d\\n\\nAs the reviewer states, the activity of the network is difficult to interpret in this situation but we agree this is a plausible interpretation of network dynamics after pre-training and may in fact be what is occurring experimentally.\\n\\n* \\u201cIn the network trained also by RL in particular, the authors interpret this as [...], nor any particular planning mechanism.\\u201d\\n\\nWe agree that this may be an over-interpretation as the agent has no active sampling capability. Our wording here is ambiguous and we have removed references to agent sampling in the updated manuscript. However, sweeping behaviour (as shown experimentally) by Johnson & Redish is undoubtedly occurring.\\n\\n* \\u201cAlternatively, the authors may be claiming [...] quantify it explicitly.\\u201d\\n\\nThe path switching behaviour does only occur after reward training and not before, we have further emphasised this in the paper on page 7. We have updated the paper to indicate that path switching occurs reliably in a very similar way after reward training with differing numbers of LSTM units and initial conditions as long as the reward task is solved without backtracking at secondary cue locations and is trained on the combined loss (Eq. 2, this was Eq. 3 in the original paper). Further analysis of this can be seen in Figure 3 which we introduce in the updated manuscript.\\n\\n* \\u201cAlthough in this case, [...] out-of-distribution case.\\u201d\\n\\nWe would argue that the network is more able to rely on the cue during reward training due to the spatial map of the maze formed during pre-training. Exposure to the wrong-arm combination is not sufficient for the network to exhibit the path switching behaviour shown in Figure 6 for various reasons. Firstly, the network converges to a solution which does not include backtracking at secondary points, therefore the network is choosing actions at the choice points which leads the agent to take a direct path to reward locations. Thus path switching is not part of the network\\u2019s inherent behaviour. Secondly, in the out of distribution case when the agent is paused at the primary choice point, the instantaneous representation jump from the rewarding maze arm to the opposing maze arm is not occurring anywhere in training, even at the beginning of epsilon-greedy reward training when actions are chosen completely randomly. 
Lastly, simply being out of distribution cannot possibly explain the secondary path switch shown at timestep 32 in Figure 6.\\n\\n\\n* \\u201cIn the discussion section, [...] the biological case.\\u201d\\n\\nWe agree that this may be an overclaim and we have removed suggestions of active planning in our updated manuscript, however stark similarity in terms of dynamics to the experimental case is a significant insight we believe. We thank the reviewer for the idea of having agent induced pausing as additional trainable behaviour which would certainly be closer to the biological case. We believe this would be too great a change to the current work as a revision but we will certainly explore this in future work.\"}",
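The "representation jump" discussed in this reply presupposes some way of reading a maze position out of the hidden state. One simple possibility (an assumption here, since the rebuttal does not spell out its decoding method) is nearest-template decoding against hidden states recorded at known locations:

```python
import numpy as np

def decode_position(hidden, templates, positions):
    # hidden: (n_units,) current LSTM hidden state.
    # templates: (n_locations, n_units) mean hidden state previously recorded
    # at each maze location; positions: the matching (x, y) coordinates.
    dists = np.linalg.norm(templates - hidden[None, :], axis=1)
    return positions[int(dists.argmin())]  # nearest-template location
```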
"{\"title\": \"Response to reviewer 3 (continued)\", \"comment\": \"Minor comments:\\n\\n* \\u201cRecurrence based\\u201d should be changed everywhere to \\u201crecurrent\\u201d (e.g. on pg. 1), and \\u201cNeuroscience\\u201d is not capitalized. The motivation to use double Q learning should be expanded on in pg. 3 prior to equation 3.\\n\\nWe thank the reviewer for pointing out these errors and these have been corrected in the updated manuscript. We have also motivated the use of double Q-learning in the updated paper on page 4.\\n\\n* Question: The authors mention that Q-learning performs poorly on tasks in dynamic environments \\u2013 however, I do not see any evidence of this in the paper, it would be imperative to show this explicitly for the environments they consider. Suppose this is in fact the case, could you clarify what makes your approach more successful at this task than others? Is it because of the pretraining to predict the subsequent observation of wall colors from the current wall color observations?\\n\\nWe thank the reviewer for highlighting this (as also done by reviewer 2). \\nIn the paper we state that Q-learning alone cannot solve the task after the network has been pre-trained on the predictive task. This is the case when using the rate of epsilon decay (in epsilon-greedy RL training) we use for the joint Q-learning and wall colour prediction training (Eq. 2, this was Eq. 3 in the original paper). We have run more rigorous analyses of running Q-learning alone with the same network without pre-training, Q-learning alone with pre-training and the combined Q-learning and colour prediction loss with (method presented in paper) and without pre-training. These analyses can be seen on page 4 and in Figure 3 of our updated manuscript. In essence, training with a combined loss (Eq. 2) after predictive pre-training converges with far fewer training iterations than any of the other cases. This shows that predictive learning builds efficient representations that contain relevant task variables, without including these explicitly in the optimisation procedure.\\n\\n* As it stands, I think the ideas of this paper are interesting and think it unifies prior approaches, but I do not think the conclusions from the modeling add all that much novel insight from prior approaches. Therefore, I recommend a weak accept.\\n\\nWe thank the reviewer for their recommendation but would like to stress again that our model is markedly different from previous approaches, and reproduces key aspects of hippocampal dynamics which have not yet been shown in a neural network.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank the reviewer for their feedback and overall positive evaluation of our work. However, we believe the reviewer may be unfamiliar with the previous work in this area. We address the points made below and in our updated manuscript.\", \"weaknesses\": \"1. The primary conclusion, namely, training an RNN on a maze-like environment gives you place cells, is really not all that new, especially considering that the network is still supervised to predict position and landmarks.\\n\\nWe think the reviewer is incorrect here, to the best of our knowledge training an RNN on an environment with emerging place cells has only been shown once before (Recanatesi et al., bioRxiv 2019) and for the first time in a maze-like environment in this work. Our main result is not purely the emergence of place cells, but the combination of predictive and reinforcement learning to solve a navigation task which results in network units displaying hippocampal neuron characteristics.\\n\\nIt is important to point out that the network in our model is not supervised to predict position or landmarks at all, it is only instructed to predict subsequent visual stimuli in pre-training and extended to predict global reward in Q-learning. We think the reviewer may have been misled by this line in our discussion section: \\u201cThis is similar to the purely contextual input received by the model pre-trained by Xu & Barak (2020) where no velocity input is given, however, the network here is still trained on position and landmark prediction in a supervised way.\\u201d\\n\\nTo clarify, here we are referring to the work by Xu & Barak, not our work. We have updated our paper to make this clearer.\\n\\nOur hypothesis was that predictive learning is well suited to be combined with reinforcement learning (we add a prediction of the Q-value for reward training). We show that the resulting representation not only allows efficient learning but also yields cells with properties observed in the hippocampus. It is important to stress that previous work where training was based on trajectories of velocity and position did not report such behaviour and only demonstrate the formation of activations with the form of entorhinal cortex grid cells which are used in navigation differently to place cells. These authors also did not analyse resulting dynamics.\\n\\n2. Given that lack of novelty in the modeling conclusion, it would have therefore been nice to have seen more quantitative comparisons to hippocampal recordings in rodents. Does their approach explain more variance in these neurons than prior approaches? Otherwise, it seems that they simply recapitulate prior qualitative comparisons.\\n\\nWe agree with the reviewer that direct quantitative comparisons would be interesting yet they may be misguided given the significant differences between neural and neuronal networks. 
We believe in this context our main novel contribution is the demonstration that the combination of predictive and reinforcement learning results in dynamics also observed in the hippocampus, specifically we show extrafield firing of network units at locations outside of their apparent place fields (Johnson & Redish, 2007), non-local forward sweeping representation of the network (Johnson & Redish, 2007), place fields drifting towards reward locations throughout training (Lee et al., 2006), a high proportion of units with place fields at the maze start location encode reward locations (Ainge et al., 2007) and that a higher proportion of units encode task phase than turn direction (Griffin et al., 2007). These specific comparisons between RNNs and hippocampal neurons have not been demonstrated before as far as we are aware.\"}",
"{\"title\": \"Response to reviewer 2 (continued)\", \"comment\": \"Cons... :\\n\\n4. Threshold for place cells are defined? Why 30%? How many units show firing above that threshold? \\n\\nThe reviewer is correct in pointing out that this threshold is arbitrary but is not clear on how this threshold is applied - we have updated the paper (page 5) to clarify this.\\nA threshold of 30% is used to mirror the threshold used experimentally by Johnson and Redish at a per place cell level. It is a threshold placed at a per LSTM unit level in our work, so we denote a unit\\u2019s place field as the unit activity above 30% of the peak activity of each particular unit - these are shown in dotted regions in figure 3 (now figure 4 in the updated manuscript). Every LSTM unit has a different firing threshold. \\n\\nWe obtain the activity maps in the top row of figure 4 through the collection of unit activity from a full left sided trajectory from the start point returning to the start point with cues presented together with a full right sided trajectory. We show all activity from this activity collection in the top row of figure 4 and proceed to outline the areas in each unit with activity higher than 30% of the peak activity of that particular unit. \\n\\nIn the activity plots in the bottom row of figure 4, we only show activity above 60% of unit peak activity (identified with the previously collected activity) when the agent is run from the start point to the top of the central maze stem and paused, with a left cue presented halfway up the maze stem, shown in addition to the previously identified place fields.\\n\\n5. Cherry picked runs or are these averaged across several testing runs? Singular or plural observations? 0ed input to simulate pondering?\\n\\nThe reviewer notes an important point on the reliability of the forward sweeping and path switching phenomena shown in figures 4 and 5 (now figures 5 and 6 in the updated manuscript). Reviewer 1 has also requested clarification on this point. Forward moving behaviour of the network representation (as seen in Figure 5) occurs after pre-training consistently with regards to initial conditions and the number of units in the network. Path switching behaviour exhibited in Figure 6 arises in every instance where the network converges and solves the reward task with a combined loss (Q-learning and predictive), and without backtracking at secondary choice points. This occurs robustly with different network sizes above 380 units and initial conditions as long as the target Q network is updated every 15 or so iterations and the discount factor is low enough so the network does not converge on solutions which include backtracking.\\n\\nWe thank the reviewer for pointing out this erroneous caption and have corrected this in the paper. Indeed the network is fed the same visual input while the agent is paused and analysis is performed identically in both figures. We appreciate the reviewer\\u2019s suggestion to completely zero this input to simulate pondering - we find intriguingly comparable results in this case and have updated the paper (page 8) to include this outcome.\", \"minors\": \"1. Why 380 LSTM units instead of 256 or 512? Is this number affecting the representations?\\n\\nWe found that the network converges and solves the reward task without backtracking at secondary choice points with 380 or more units, and we used this size (380) to show the efficacy of the model with as few LSTM units as possible. 
We show our analysis on convergence in Figure 3 in our updated manuscript. The representations and behaviours emerging as a result of training (figures 3, 4 and 5) (now figures 4, 5 and 6) are highly comparable with different network sizes above 380 as long as convergence without backtracking at secondary points is achieved (which generally does not occur with a network smaller than 380 units). We clarify these points in our updated manuscript on pages 4, 5 and 7.\\n\\n2. It would have been nice to have analysis to support the following sentence on page 4: \\u201cGenerally, when the network loses its ability to self-localise the agent, state-action values are no longer reliable indicators of future reward potential as the current environmental state is not clearly discernible\\u201d.\\n\\nWe agree with the reviewer that this sentence is ambiguous and have decided to remove it in our updated manuscript. We instead opt for more analysis on different training regimes and how these are affected by pre-training - we have added this on page 4 and Figure 3.\"}",
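The per-unit thresholding procedure described in this reply could be implemented roughly as follows; this is a sketch, and the array shapes are assumptions.

```python
import numpy as np

def place_field_masks(activity_maps, frac=0.30):
    # activity_maps: (n_units, H, W) average activity of each LSTM unit at each
    # maze location, aggregated over one left then one right trajectory.
    # A location lies in a unit's place field when its activity exceeds `frac`
    # of that unit's own peak (0.30 here; 0.60 for the extrafield plots).
    peaks = activity_maps.reshape(activity_maps.shape[0], -1).max(axis=1)
    return activity_maps > frac * peaks[:, None, None]
```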
"{\"title\": \"Response to reviewer 2\", \"comment\": \"We thank the reviewer for their feedback and for being transparent with regards to their evaluation. We have clarified the points made and have made substantial updates to the manuscript to address the reviewer\\u2019s comments.\", \"cons\": \"1. Wall colours fixed or variable, Q-learning objective solving this task, steps between the cue and the reward and discount factor chosen (0.8):\\n\\nWe very much agree with the reviewer that this is ambiguous in our current explanation and we have updated the paper (page 2), explicitly stating that the wall colours remain fixed throughout training. In the paper we state that Q-learning alone cannot solve the task after the network has been pre-trained on the predictive task. This is the case when using the rate of epsilon decay (in epsilon-greedy RL training) we use for the joint Q-learning and colour prediction training (Eq. 2 - Eq. 3 in original paper) described in the paper. \\n\\nWe have run more rigorous analyses of running Q-learning alone with the same network without pre-training, Q-learning alone with pre-training and the joint Q-learning and colour prediction loss (Eq. 2) both with pre-training (shown in the paper) and without pre-training. We have updated the paper (Figure 3, page 5) to summarise these results. Taken together, training with a joint loss after pre-training converges with far fewer training iterations than the other cases. \\n\\nWe have previously run analyses with differing discount factor values and find that a value higher than 0.8 regularly causes the model to converge without taking the most direct route to reward locations (i.e converging on solutions with backtracking at secondary choice points when discount factor is higher than 0.8). \\nThe number of steps in a single training episode is 30 (if the agent takes the most direct path without backtracking). There are 5 steps from the cue to the choice point with 7 steps from the choice point to the first reward site. We have updated the paper (page 4 and page 3) explaining the discount factor used and the number of steps inherent in the task.\\n\\n2. Loss_rgb fine-tuned while training with the Q-learning objective:\\n\\nWe agree this point is ambiguous and we have clarified this in the paper (page 4). The predictive pre-training task where loss_rgb is optimised converges completely. It is not fine-tuned while training the Q-learning objective to navigate to reward locations, but is required so the non-metric representation of space is maintained during learning. When loss_rgb is not included during Q-learning, the place fields are lost, and the training converges much slower to a different solution without an explicit representation of space.\\n\\t\\n3. Learning rates and sizes of linear layers used in eq. 1 and 2:\\n\\nWe have updated the paper to include the following details on pages 2, 3 and 4 of the manuscript. The linear layers in eq. 1 and eq. 2 (Eq. 3 in the updated manuscript) are simply single layered readout layers for the LSTM. To clarify, when we have a 380 unit LSTM, the shapes of these readout layers are 380 x 12 (when predicting four RGB wall colours) and 380 x 4 (when predicting Q-values) respectively. We use a learning rate of 0.0005 with an Adam optimiser for reward training, as we find this learning rate gives good convergence (without backtracking at secondary points) with a greater range of training hyperparameters and initial conditions than with a learning rate of 0.001. 
We use a learning rate of 0.001 for pre-training.\"}",
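The architectural details given in this reply (a 380-unit LSTM with 380 x 12 and 380 x 4 single-layer readouts, Adam at 5e-4 for reward training and 1e-3 for pre-training) could be assembled as in the sketch below; the input dimensionality and the exact loss weighting are assumptions, not details confirmed by the rebuttal.

```python
import torch
import torch.nn as nn

class MazeAgent(nn.Module):
    # LSTM core with two single-layer readouts, matching the reported shapes.
    def __init__(self, obs_dim=12, hidden=380):
        super().__init__()
        self.lstm = nn.LSTM(obs_dim, hidden, batch_first=True)
        self.rgb_head = nn.Linear(hidden, 12)  # next-step wall-colour (RGB) prediction
        self.q_head = nn.Linear(hidden, 4)     # Q-values for the 4 cardinal moves

    def forward(self, obs, state=None):
        h, state = self.lstm(obs, state)
        return self.rgb_head(h), self.q_head(h), state

model = MazeAgent()
opt = torch.optim.Adam(model.parameters(), lr=5e-4)  # 1e-3 for pre-training
# The combined objective (Eq. 2) would then sum the predictive loss and the
# TD error, e.g. loss = mse(rgb_pred, next_obs) + mse(q_taken, td_target).
```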
"{\"title\": \"Response to reviewer 1 (continued)\", \"comment\": \"Clarification Questions... :\\n\\n3. Reliability of path switching phenomenon:\\n\\nThis is an important point which we should have touched on in the paper. This phenomenon is reliably observed from different initial conditions and network sizes, and is apparent when reward training of the network converges using a combined loss (Q-learning and wall prediction together - Eq. 2) and solves the task perfectly (i.e no backtracking at secondary cue/choice points). The network generally converges in this manner as long as the target Q predictor network is updated around every 15 iterations and the network contains more than 380 units (see Figure 3). We have updated the paper (page 7) reflecting on the reliability and conditions of the path visiting phenomenon and look at convergence with differing network sizes (page 4).\\n\\n4. Consequences of second and third paragraphs in discussion:\\n\\nIn these paragraphs we aim to emphasise why our training paradigm differs from previous work and why this difference may lead to the resulting dynamics shown in the paper. Previous work (those with emerging grid cells as a result of training) consists of recurrent networks which are given positional and velocity information and this culminates in responses of the units similar to grid cells. This sort of training paradigm allows for a straightforward emergence of metric representations such as grid cells, however our model only receives visual stimulus and only learns to predict the subsequent visual stimulus, the inputs do not contain any spatial information. We then extend this to a model which predicts subsequent visual stimulus and Q-values from the same input, and thus propose that the hippocampus generally functions as a predictive network. We have extended the text to further clarify these points.\", \"additional_feedback\": \"1. Discussion on what we learned about the function of the hippocampus from this model, and model predictions about neural activity in hippocampus.\\n\\nOur main focus in this paper is on the learning architecture we propose, and how this not only efficiently learns to solve a RL task, but also reproduces several properties of hippocampal place cells. For further clarification, please also see our replies above. We agree it would be interesting to use this model to generate testable predictions, and have added some thoughts on this in the discussion.\\n\\n2. Oscillations and phase-precessions phenomena in the model:\\n\\nThe LSTM network we use and artificial neural networks in general are not based on spiking communication, therefore we could not observe this phenomena in our model, however we will try with spiking networks in future work. As it stands, our model is not capable of producing oscillations and the chunking of activity into theta cycles, therefore it is difficult to address the question whether it exhibits a phenomenon similar to phase precession.\\n\\n3. Network used to simulate difficult experiments and understand how future paths exploration works in an open-field setting:\\n\\nThe reviewer touches on an important extension to this work which we are already looking into. We use this model to train an agent on an open arena to navigate towards reward locations, pre-training the agent to perform random walks in the open arena and predicting subsequent visual stimuli as in this work beforehand. 
We observe the formation of well isolated singular place fields in the majority of network units and we will next explore the dynamics of the trained network to attempt to understand path exploration.\\n\\n4. First sentence of the abstract is difficult to understand:\\n\\nWe agree with the reviewer and we have divided this sentence into 2 parts in the updated manuscript. We believe this summarises the goals of the paper well.\"}",
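The paused-agent analysis referred to in these replies (the same observation, or a zeroed one, fed for many timesteps) could be probed as follows, assuming a model with the `MazeAgent`-style forward signature sketched earlier; the step count is arbitrary.

```python
import torch

@torch.no_grad()
def pause_probe(model, obs, state, n_steps=40, zero_input=False):
    # Freeze the agent at a decision point: feed the same observation (or a
    # zeroed one, simulating 'pondering') for n_steps and record the hidden
    # states, which can then be decoded to maze locations.
    frozen = torch.zeros_like(obs) if zero_input else obs
    hiddens = []
    for _ in range(n_steps):
        _, _, state = model(frozen, state)
        hiddens.append(state[0].squeeze(0).clone())  # h from the (h, c) tuple
    return torch.stack(hiddens)
```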
"{\"title\": \"Response to reviewer 1\", \"comment\": \"We thank the reviewer for their careful reading and positive evaluation of our paper, in addition to the constructive feedback. We have clarified the points raised and updated the manuscript accordingly.\", \"weak_point\": \"Using our model to improve understanding of the hippocampus and make testable predictions:\\n\\nWe fully agree that this would be the ultimate goal of our model, to improve understanding of the brain with a relatively small RNN model which can be used to test hypotheses regarding hippocampal dynamics. In this work we aim to show that the LSTM dynamics resulting from training the network using the combination of predictive and reinforcement learning on the maze reward task, mirrors hippocampal neuronal dynamics found experimentally, and as such provides evidence that the underlying learning rules may be similar in the hippocampus. Specifically, we show that non-metric attractors form in the activation space of our network units in the way of place cells, we show extrafield firing of these units at location outside of their apparent place fields, non-local forward sweeping representation of the network, place fields drifting towards reward locations throughout training, a high proportion of units with place fields at the maze start location encode reward locations and that a higher proportion of units encode task phase than turn direction. As far as we know, this is the first model to replicate all these behaviours, and as such provides evidence that the underlying learning rules may be similar in the hippocampus. We therefore expect that our model can be used to generate new predictions in other tasks, and are currently working on this question. We have added a section on how our model could improve hippocampus understanding in the discussion section of the paper.\", \"clarification_questions\": \"1. Role of the secondary cue:\\n\\nThe reviewer makes an important point on the necessity of the secondary cue points in this task. Our overall aim is to compare our trained model\\u2019s dynamics with that of hippocampal neurons as captured using experimental data. To this end we aim to mirror these experimental set ups as closely as possible - in this case we mirror the maze set up of Jonson and Redish, 2007 in order to optimally compare our resulting dynamics to that of hippocampal neurons. In terms of the task, including secondary cue/choice points gives the agent the opportunity to backtrack on its decision made at the primary choice point in light of further environmental observation (the presentation or lack thereof of the secondary cue). We believe that experimentally, the presence of secondary cue points gives rise to some of the extrafield firing observed by Johnson and Redish as the rodent reevaluates its prior primary choice at a secondary cue point, resulting in the firing of place cells with place fields on the opposing side of the maze. We have updated the paper (page 3) justifying the inclusion of the secondary cue.\\n\\n2. Metric and non-metric representations:\\n\\nMetric representations relate to a Euclidean spatial map of an environment and are biologically akin to grid cells in the entorhinal cortex, whereas non-metric representations relate to associative landmark maps of the environment and are comparable to place cells in the hippocampus. We feel this is adequately explained in the introduction.\"}",
"{\"title\": \"Statement to all reviewers: Paper revision uploaded with new analyses and improved explanations\", \"comment\": [\"We thank all four reviewers for their attentive reading of our paper and for their positive evaluations and constructive feedback. We trust that we have addressed all comments raised by reviewers in our individual replies below, and we have run the following analyses to further clarify concerns:\", \"We have conclusively compared the training time of four LSTM training paradigms: the combined loss shown in the paper (Eq. 2), with and without pre-training and Q-learning loss alone with and without pre-training (Figure 3 and page 4).\", \"We have tested the reliability and required conditions of the path switching phenomenon shown in Figures 6 and 7.\", \"We have compared the training time and convergence rate for the pre-trained combined loss model with different network sizes (Figure 3) and also tested the reliability of path switching in these resulting representations.\"], \"in_addition_we_have_made_the_following_major_changes_to_the_paper\": \"* Justification for the inclusion of secondary cue points in reward training (page 3).\\n*Explanation for the use of double Q-learning and justification for use of a discount factor of 0.8 (page 4).\\n*Explanation for the inclusion of the predictive component in the combined loss function (page 4).\\n*Vastly improved explanation of Figure 4 extrafield activity map generation (page 5).\\n*Explanation of required minimum training conditions for emerging path switching behaviour of network representation (page 7).\\n*Analysing the effect of zeroing visual input while the agent is paused at the choice point (page 8).\\n*Discussion on how our model could be used to improve the understanding of the hippocampus (page 9).\\n\\nWe have uploaded an updated manuscript with these analyses and alterations, and with all reviewer comments answered. The newly performed analyses do not change the conclusions of the paper, but we think both strengthen and extend them. We\\u2019d also like to reiterate the primary outcomes and novelties of the paper:\\n\\n* We introduce a novel training paradigm combining predictive and reinforcement learning which converges in far fewer iterations on a reward task after predictive pre-training vs. reinforcement learning alone.\\n* This training paradigm replicates key observations from hippocampal place cells:\\n 1. Non-metric attractors form in the activation space of our network units in the way of place cells, uniformly covering the maze environment.\\n 2. Extrafield firing of these units at locations outside of their apparent place fields.\\n 3. Non-local forward sweeping representation of the network.\\n 4. Place fields drifting towards reward locations throughout reward training. \\n 5. A high proportion of network units with place fields at the maze start location encode reward locations.\\n 6. A higher proportion of network units encode task phase than turn direction. \\n\\nAs far as we know, this is the first model to replicate these behaviours in a neural network.\"}",
"{\"title\": \"A compelling model of some hallmark properties of the hippocampus\", \"review\": \"In this paper, the authors train a recurrent neural network on a navigation task, and observe the emergence of several phenomena reminiscent of the hippocampus: appearance of place cells with a secondary receptive field at task-relevant locations; anticipation of possible future paths in the activity of the model, with alternation in time between possible future paths; a high proportion of neurons tuned to task variables rather than animal trajectory.\", \"strong_points\": [\"these findings are compelling, they account for some hallmark properties of the activity of hippocampus, and they could lead to a better understanding of the role and function of the hippocampus.\", \"the experiments are rigorous and convincing.\"], \"weak_points\": [\"some technical aspects of the paper could be clarified (see below)\", \"it is unclear how this model improved our understanding of the hippocampus function, and whether the model makes any testable predictions about the hippocampus.\", \"I recommend to accept this paper because of its strengths listed above.\"], \"clarification_questions\": \"1) I did not understand the role of secondary cue point. Why were these required in addition to the primary cue point and choice point?\\n2) vocabulary: What are \\\"metric\\\" and \\\"non-metric\\\" representations?\\n3) How reliable is the alternative path visiting phenomenon? Can this phenomenon be observed reliably in networks trained from different initial conditions?\\n4) I did not understand the consequences/take-homes of the second and third paragraph discussions.\", \"additional_feedback\": \"1) It would be interesting to see a discussion on what we learned about the function of the hippocampus from this model, and/or what predictions this model makes about neural activity in hippocampus. \\n2) Are there any oscillations and phase-precessions phenomena in the model? If not, it would be interesting to discuss why these oscillations might be present in the brain but not in the model.\\n3) Could this network be used to simulate difficult experiments, e.g. understand how future paths exploration works in an open-field setting?\\n4) The first sentence of the abstract is difficult to understand. In general, shorter sentences could improve clarity.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting findings, bad explanation.\", \"review\": \"Summary:\\n\\u00a0\\nThe authors trained a recurrent network to perform a sensory prediction task and this gave rise to units that resembled hippocampal place fields. Then they augmented the network with a Q-learning objective and shown that the activity in the network sweep forward in space if the agent is fixed at a decision point. \\n\\n##########################################################################\", \"reasons_for_score\": \"Overall, I think the paper should be rejected. The work is interesting, but the clarity of the paper is not at the level of the findings. I think the authors should enhance exposition, and strengthen some analysis. The findings are really interesting ,but at the current stage I don\\u2019t think the paper is ready to be published. However, I\\u2019m happy to revise my score if authors addresses my comments. \\n\\n##########################################################################\", \"pros\": \"1. Combination of unsupervised learning and RL to show that units in a recurrent network can be used to understand spatial and non-spatial firing patterns in the hippocampus\\n\\n2. Figure 4 and 5 are convincing.\", \"cons\": \"1. While explaining the task the authors claim that the colours are chosen at random, however it is not clear whether these stay fixed across episodes or they changed. Intuitively it looks like, once the colour are generated, then they stay fixed, but in this case it is difficult to understand why a simple Q-leaning objective won\\u2019t be able to solve this task (as claimed by the author). The only reason I can think about is if the steps between the cue and the reward are longer that the ones allowed by the discount factor chosen (0.8). However, in the paper there are no details about the number of steps or how the discount affect the results. I think this is a serious issue.\\n\\n2. The authors claim that they first pre-train on the predictive task, but then in the loss of eq. 3 they report a combined loss. Does it mean that the loss_{rgb} is also fine-tuned while training with the Q-learning objective? This point need clarification. \\n\\n3. The paper doesn\\u2019t report any details about the learning rates or the sizes of the linear layers used in eq. 1 and 2. This way is impossible to replicate this results. This is a serious issue.\\n\\n4. How the threshold for place cells are defined? Why 30%? How many units show firing above that threshold? This needs further analysis to support the decision, which otherwise seem very arbitrary. \\n\\n5. Are figure 4 and 5 just cherry picked run or are these averaged across several testing runs? Also in the captions of these figures the authors are using the singular \\u201cobservation \\u201d in figure 4 and the plural \\u201cobservations\\u201d in figure 5. Does it means that the analysis have been performed differently? Or is it just a mistake?. This is an important point as I think the results will be more powerful with the same image fed as input, or even better with no image, just with 0ed input to simulate pondering.\", \"minors\": \"1. 380 units seems quite an unconventional number of units, why not 256 or 512? Can you please explain. Also is this number affecting the representations? Have you done a sweep and settle on this number because it support your findings better? If so, it would be important to mention it.\\n\\n2. 
It would have been nice to have analysis to support the following sentence on page 4: Generally, when the network loses its ability to self-localise the agent, state-action values are no longer reliable indicators of future reward potential as the current environmental state is not clearly discernible. Otherwise please correct it.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Unifies prior approaches though not many new insights\", \"review\": \"Motivated by biological considerations, this paper shows that recurrent networks trained with a predictive and goal-based objective on a maze finding task, qualitatively recapitulate experimental findings in hippocampal recordings in rodents trained on the same task. In particular, these LSTM networks demonstrate both metric representations of their environment and nonlocal extrafield firing at decision points along the maze (anticipating the future trajectory of the agent).\", \"strengths\": [\"I like that the authors take a normative approach that exhibits both metric and non-metric place cell representations of the environment, unifying prior findings in one model.\", \"I appreciate that no velocity input is given to their model, in contrast to prior approaches.\", \"I also liked the qualitative comparisons to hippocampal recordings from rodents trained on the same task (especially Figures 6 and 7).\"], \"weaknesses\": [\"The primary conclusion, namely, training an RNN on a maze-like environment gives you place cells, is really not all that new, especially considering that the network is still supervised to predict position and landmarks.\", \"Given that lack of novelty in the modeling conclusion, it would have therefore been nice to have seen more quantitative comparisons to hippocampal recordings in rodents. Does their approach explain more variance in these neurons than prior approaches? Otherwise, it seems that they simply recapitulate prior qualitative comparisons.\"], \"minor_comments\": \"\\u201cRecurrence based\\u201d should be changed everywhere to \\u201crecurrent\\u201d (e.g. on pg. 1), and \\u201cNeuroscience\\u201d is not capitalized. The motivation to use double Q learning should be expanded on in pg. 3 prior to equation 3.\", \"question\": \"The authors mention that Q-learning performs poorly on tasks in dynamic environments \\u2013 however, I do not see any evidence of this in the paper, it would be imperative to show this explicitly for the environments they consider. Suppose this is in fact the case, could you clarify what makes your approach more successful at this task than others? Is it because of the pretraining to predict the subsequent observation of wall colors from the current wall color observations?\\n\\nAs it stands, I think the ideas of this paper are interesting and think it unifies prior approaches, but I do not think the conclusions from the modeling add all that much novel insight from prior approaches. Therefore, I recommend a weak accept.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official review\", \"review\": \"**Paper summary**\\n\\nThe main goal of this paper is to show that LSTM units in a network trained to solve a T-maze task, show similar activity patterns as neurons in rats solving a similar task. Specifically, the authors make the following claims: (1) an RNN learning the task by a combination of reinforcement and predictive learning produces internal representations with consistent extrafield firing associated with consequential decision points, (2) the network\\u2019s representation, once trained, follows a forward sweeping pattern similar to those found in rats and (3) a higher proportion of units in the trained network show strong selectivity for the choice phase of the task than for spatial topology, as seen in rats.\\n\\n**Pros**\\n1. The submission is clear, well-written and the execution is competent. \\n2. I find the approach well motivated and the problem interesting for the current state of the field.\\n3. The authors provide a sufficient amount of details so that reproducibility should be possible.\\n\\n**Cons**\", \"i_have_concerns_about_key_points_of_the_paper_and_the_interpretation_of_the_results\": \"1. One of the main results of the paper is the observation that the network produces a forward-moving representation similar to the one observed in rats at the decision point. However, the way the authors simulate this is by freezing the agent at such point and keeping the LSTM running, repeating the same constant observation. The LSTM was trained on trajectories on the maze, so in this trajectory (which was never used during training), the network is completely out of distribution. The activity of any network in this situation is difficult to interpret. Because the network was trained with a predictive loss on two very specific sequences of observations, it seems plausible that it is robust to the change of input statistics and follows the same sequence, maybe with some instabilities. Note that the cue was present, which explains why the correct sequence is followed. \\nIn the network trained also by RL in particular, the authors interpret this as \\u201cThe agent appears to be sampling the trajectory concerning the alternate return arm of the maze before ultimately settling on the rewarding return arm\\u201d. But this is, in my opinion, an over-interpretation, as the agent has no sampling capability in the first place (there is no generative model of observations), nor any particular planning mechanism. Alternatively, the authors may be claiming that this jumping behavior happens only after the RL training and not before, in which case they should emphasize this difference and quantify it explicitly. Although in this case, a simple explanation for this could be that due to the epsilon-greedy, only during the RL training the network is exposed to the wrong cue-arm combination. Therefore, the LSTM would be less able to rely on the cue, which could explain the jump between attractors in the out-of-distribution case. In the discussion section, the authors claim \\u201cWe demonstrate that extrafield firing activity [..] emerges when a simulated agent [...] pauses at decision points - suggesting intrinsic dynamics are encoding the future planned trajectory of the agent.\\u201d. I find this to be an over-claim, as the agent doesn\\u2019t pause (it can\\u2019t) and doesn\\u2019t plan (for any common definition of planning). 
I would be more convinced if the agent was able to pause (as an additional action) and this behavior was observed in the LSTM activity in this situation, which is closer to the biological case.\\n\\n2. The pre-train stage is done on trajectories that correspond to the solved task. This means that the LSTM trained by the predictive loss is not exposed to the general structure of the environment but to the specific solution of the task, including the cue-choice association and the exact sequence of observations in each of the two correct trajectories. The authors draw a parallel with the pre-training phase in behavioral experiments (Johnson & Redish, 2007) in which rats usually run each trajectory separately (by having the other one blocked). However, in my opinion, this is problematic for their analysis. First it\\u2019s unclear to what extent the network is learning by RL as during the pre-training it has already learnt to predict the observation corresponding to the correct turn (wich corresponds to one of the two actions). Second, the LSTM is exposed only to the correct cue-arm trajectories, which I think is the reason why the forward-looking sweeps only follow these trajectories (see previous point). This is more similar to a demonstration than to pre-training. A more conventional pre-training would leave the agent to explore freely to implicitly learn the structure of the environment (a la Tollman). On the other hand, the argument of following the protocol of the behavioral experiments also doesn\\u2019t fully work as the rats are still producing motor outputs and even being rewarded during the pre-training phase (Johnson & Redish, 2007).\\n\\n3. Finally, a more general concern is the main point of the paper. If I understand correctly, the main claim is the similarity of the observations between the RNN agent and the experimental findings in rats. However, given that there are plenty of arbitrary choices when training an RNN, I believe the results are not particularly explanatory. I would encourage the authors to formulate better alternative hypothesis and controlled experiments. For example, I would find it interesting to show that the forward-sweeping observations done in rats, which is often interpreted as a signature of planning or prediction of the consequences of future actions, arises simply from a next-step prediction loss in an overtrained rat.\", \"minor_concerns\": [\"\\u201cAs such, a network of Gated Recurrent Units [...] or vanilla RNN units was unable to perform well in either the pre-training or joint RL task due to these prevalent long term dependencies.\\u201d How many steps are there between cue and choice, and between choice and reward?\", \"Related to the previous point: \\u201cWe attempted to run the reinforcement learning task alone in a maze with no wall colours or environment statistics except the cue. In this scenario the network is not able to learn the task due to a lack of self-localisation.\\u201d If I understand correctly, there is a constant number of steps between the cue and the moment where the choice has to be made. I would tend to believe that an LSTM can learn to make a prediction only based on the number of timesteps, regardless of the lack of wall observations (e.g. 2 sequence problem in Hochreiter and Schmidhuber, 1997).\"], \"details\": [\"It would be good to clarify what exactly is the action set of the agent.\", \"I would like to know how exactly are activity maps obtained. 
The authors mention \\u201cPlace fields determined by contiguous locality with average activity exceeding 30% peak unit activity during a single left trajectory followed by a right trajectory\\u201d but I don\\u2019t find this particularly clear. I also found it difficult to understand the bottom row of Fig 3.\", \"Fig 5 should be referred to in the paragraph starting with \\u201cIn stark contrast to the dynamics of the LSTM network following predictive pre-training...\\u201d\", \"Why is the return to start representation in Fig. 6 different for right and left trajectories?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
tv8n52XbO4p | Learning to Generate Noise for Multi-Attack Robustness | [
"Divyam Madaan",
"Jinwoo Shin",
"Sung Ju Hwang"
] | Adversarial learning has emerged as one of the successful techniques to circumvent the susceptibility of existing methods against adversarial perturbations. However, the majority of existing defense methods are tailored to defend against a single category of adversarial perturbation (e.g. $\ell_\infty$-attack). In safety-critical applications, this makes these methods extraneous as the attacker can adopt diverse adversaries to deceive the system. Moreover, training on multiple perturbations simultaneously significantly increases the computational overhead during training. To address these challenges, we propose a novel meta-learning framework that explicitly learns to generate noise to improve the model's robustness against multiple types of attacks. Its key component is Meta Noise Generator (MNG) that outputs optimal noise to stochastically perturb a given sample, such that it helps lower the error on diverse adversarial perturbations. By utilizing samples generated by MNG, we train a model by enforcing the label consistency across multiple perturbations. We validate the robustness of models trained by our scheme on various datasets and against a wide variety of perturbations, demonstrating that it significantly outperforms the baselines across multiple perturbations with a marginal computational cost. | [
"adversarial learning",
"robust machine learning",
"robust optimization",
"meta learning"
] | Reject | https://openreview.net/pdf?id=tv8n52XbO4p | https://openreview.net/forum?id=tv8n52XbO4p | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"AR_WaCO3VC6",
"Rr3F8mVTL2g",
"OGfsUYi5LXR",
"WY-Ik7ShZzf",
"G8PHbp1CjyE",
"qEUHxU_sDs",
"7LIKLOhO1C5",
"-TDmiQMmM_",
"NIwKWutlIw",
"3IkJRTYfhJC",
"PRvYk0zUK9n",
"bOaOgkJeC3B",
"A9_soy56AEs",
"kQvZBV9ak9c",
"pY59RlzBgMn",
"iEPkMEVZy1q",
"gDOb_AIkk5",
"9eZO2H40XVX",
"ttLeaFPXyz-",
"6l4UvrmqojD",
"LkwiEtCCeSv",
"GBZarAaRqPC",
"JN8SQP5aVg",
"uwcwhu4VVoj",
"uNiWR-V7oXo",
"iF-e5SkNBvh"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040430894,
1606304878828,
1606156936670,
1606144371333,
1606133012700,
1606127612611,
1605985683736,
1605984361272,
1605895339743,
1605889783754,
1605778925126,
1605753623144,
1605730166784,
1605674739363,
1605642709308,
1605623830818,
1605606598063,
1605569943004,
1605126404038,
1605125752892,
1605123652930,
1605122885381,
1603980842430,
1603866848159,
1603859349109,
1603807626690
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3710/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This is a borderline case. The paper seems solid although some of the numbers are likely incorrect because in some results tables in the appendix the error taken over all attacks is higher than for the best individual attack (which should never happen).\\n\\nThe main contribution of this paper is to augment a standard adversarial loss (against attacks from different norms) with a \\u201cconsistency\\u201d term (consistency between clean, adversarial and noise augmented samples). The relatively large jump in robustness compared to existing schemes that do adversarial training against multiple norms is a bit surprising. A possible explanation could be that the additional consistency term smoothes the landscape around the clean samples a little bit, which could help to find better adversarial examples. The latter would be very similar to a paper by Pushmeet and colleagues (https://arxiv.org/pdf/1907.02610.pdf) which is not cited, but definitely should. It might also be worthwhile to compare to this paper. \\n\\nTaken together, this work is interesting but not sufficiently convincing yet to belong to the top papers to be selected for publication at ICLR.\"}",
"{\"title\": \"Summary of Response for R2\", \"comment\": [\"We thank you for your efforts in reviewing our paper, as well as for the insightful and constructive comments. Since the discussion phase will end in a few hours, we provide a summary of our response to your review below:\", \"We updated our notation for the norm-ball $B$ to $B_{\\\\mathcal{A}}$. Further, we explicitly defined a perturbation set $S$ and the attack sampling procedure in Eq. (5) of the revision.\", \"We elaborated Algorithm 1, incorporating your suggestions in the revision.\", \"We added a separate paragraph to highlight the intuition of our proposed framework in the revision.\", \"We clarified the flow of gradients for $\\\\phi$ and $\\\\theta$ in the revision.\", \"We clarified the meta-learning objective for the generator during the rebuttal period.\", \"Finally, we also provided the pre-trained weights to promote reproducibility.\", \"We hope that we have satisfactorily addressed all your comments and suggestions, both in the responses and in the revision. We thank you again for your comments, which helped us significantly improve the quality and clarity of the paper.\", \"Best regards,\", \"Authors\"]}",
"{\"title\": \"Response to R4 (4)\", \"comment\": \"We sincerely appreciate your comments and timely response.\\n> The chaining of gradients was not very clear from the manuscript. Maybe adding a small footnote could help the reader.\\n- Thank you for your suggestion, we have explicitly specified this in the paragraph above the overall objective in the revision.\\n\\n> One of the issues with the equation here is that $\\\\mathcal{B}$ is not clearly defined and thus we do not know which norm to use for $||v||$ . Again, this is a relatively minor comment, but maybe the norm should be specified as $||v||_p$ .\\n- We apologize for this confusion. We have defined and updated the notation of $B$ to $\\\\mathcal{B}\\\\_{\\\\mathcal{A}}$ and $||v||$ to $||v||_{\\\\mathcal{A}}$ in Eq. (1) in the revision.\\n\\n> These are subtle difference that make comparison with other work more difficult.\\n- We are sincerely sorry for the confusion. We have updated the text/caption accordingly in our revision (we use $8/255$ for $\\\\ell_\\\\infty$ for all our training models and evaluations.)\\n\\n> Additional results.\\n- According to your previous suggestions, we experimented with TRADES for ~10 hours of training time (70 epochs instead of 30 used in our current revision). We provide the results in our response. We did not observe any significant differences in our results compared to when trained with 70 epochs. We apologize that we could not do multiple runs for 70 epochs due to the shortage of compute and time.\\n| Dataset \\t| Model \\t| $\\\\ell_\\\\infty$ \\t| $\\\\ell_1$ \\t| $\\\\ell_2$ \\t|\\n|----------\\t|--------------------\\t|:-------------:\\t|:-----------:\\t|:-----------:\\t|\\n| CIFAR-10 \\t| TRADES (30 epochs) \\t| 48.9 += 0.7 \\t| 17.9 +- 0.6 \\t| 69.4 +- 0.3 \\t|\\n| CIFAR-10 \\t| TRADES (70 epochs) \\t| 48.9 \\t| 15.6 \\t| 69.7 \\t|\\n| SVHN \\t| TRADES (30 epochs) \\t| 49.9 +- 1.7 \\t| 1.6 +- 0.3 \\t| 56.0 +- 1.4 \\t|\\n| SVHN \\t| TRADES (70 epochs) \\t| 47.1 \\t| 3.9 \\t| 52.2 \\t|\"}",
"{\"title\": \"Response to R3 (2)\", \"comment\": \"Dear Reviewer 3,\\n\\nCould you please go over our responses and the revision and let us know if there is any other information we should provide since we can have interactions with you only by this Tuesday (24th)? We have responded to your comments and faithfully reflected them in the revision. We sincerely thank you for your time and efforts in reviewing our paper, and your insightful and constructive comments.\\n\\nThanks, Authors\"}",
"{\"title\": \"Thank you for the clarifications\", \"comment\": \"> we have used the standard notation for projected steepest descent\\n\\nOne of the issues with the equation here is that $\\\\mathcal{B}$ is not clearly defined and thus we do not know which norm to use for $||v||$. Again, this is a relatively minor comment, but maybe the norm should be specified as $||v||_p$.\\n\\n> there exists a counter-example for this corresponding notation, for instance, we can sample a norm-ball $\\\\mathcal{B} = \\\\ell_1$, and then project a sampled attack $\\\\ell_2$ on $\\\\mathcal{B} = \\\\ell_1$.\\n\\nI thought the paper only used one specific attack for each $p$-norm where $p = \\\\\\\\{\\\\infty, 1, 2\\\\\\\\}$. I understand that the authors want to keep the notation general, but it is usually harder to follow (especially when it is not needed in the rest of the paper).\\n\\n> we show the results with different perturbation\\n\\nThis is very interesting. Thank you for taking the time to try this.\\n\\n> we want to clarify that 8/255 = 0.031, which is the same as eps=0.03\\n\\n$8/255 = 0.03137\\\\ldots \\\\neq 0.031 \\\\neq 0.03$. These are subtle difference that make comparison with other work more difficult. Please, make sure that the text/captions are clear that these are the perturbation radii used (anyone skimming through the paper will assume 8/255).\\n\\n> We are currently training the TRADES model that uses l-inf for half of the examples and l-1\\n\\nUltimately, these are just suggestions that are triggered by my own curiosity. Do not worry about having such results before the end of the rebuttal period.\"}",
"{\"title\": \"Thank you for the quick reply\", \"comment\": \"> the gradients are chained through the T steps\\n\\nThis was not very clear from the manuscript. Maybe adding a small footnote could help the reader. I also see that I was not the only reviewer confused by this. I am happy to hear that the gradients are propagated through the whole chain.\\n\\n> we want to clarify the notation that RST (Carmon et al.) sees (50k + 500k) * 200 images during training where 50k images are the standard supervised points and 500k unsupervised Tiny Images.\\n\\nThis is really as minor issue, but I do not think the authors' understanding is correct (the supplementary material in Carmon et al. indicates otherwise: \\\"composing every batch from equal parts labeled and unlabeled data\\\"). To be honest, it is only useful to consider RST if the author also use additional unlabeled data.\\n\\n> RST does not generalize to multiple perturbations and thus is not a competitor\\n\\nI agree, although it does get very impressive l-inf and l-2 results.\\n\\n> combined with our method\\n\\nGreat new results.\"}",
"{\"title\": \"Response to R4 (3)\", \"comment\": \"We sincerely appreciate your feedback. We respond to your concerns below:\\n\\n1. Eq (1) in the revised manuscript is not the typical PGD procedure used by Madry et al. In particular, it is unclear how the argmax is solved here. \\n- Please note that the typical PGD procedure used by Madry et al. is limited to $\\\\ell_\\\\infty$ norm where it uses the sign of the gradient. In contrast, we have used the standard notation for projected steepest descent, which is common in many papers in the literature (for, eg. see Eq. (1) in Wong et al. [2], Eq. (3) in Maini et al. [2]). \\n---\\n2. The current definitions seems cyclic, e.g. $\\\\mathcal{A}_\\\\mathcal{B}$ is used in line 5 of Alg. 1 and $\\\\mathcal{B}_\\\\mathcal{A}$ is used elsewhere. It seems to be simpler to sample the norm-balls rather than the attacks notation-wise.\\n- We thank you for your suggestion. However, there exists a counter-example for this corresponding notation, for instance, we can sample a norm-ball $\\\\mathcal{B}=\\\\ell_1$, and then project a sampled attack $\\\\ell_2$ on $\\\\mathcal{B}=\\\\ell_1$. In contrast, we sample an attack $\\\\mathcal{A}$ and associate it\\u2019s corresponding norm-ball to avoid this confusion. We would also like to point out that we use $\\\\mathcal{B}_\\\\mathcal{A}$ in line 5 of Alg. 1 for consistency of our notation.\\n- Additionally, the perturbation set $S$ could generally also consist of non $\\\\ell_p$ perturbations where it is required to only sample an attack $\\\\mathcal{A}$ and the norm-ball $\\\\mathcal{B}$ is not required. \\n---\\n3. It is not clear that gradients are stopped from \\\\theta_{T-1}. Is that correct\\n- We hope that we clarified this point in our previous response (2).\\n---\\n4. Would it make sense to sample the different attacks with different rate (rather than uniformly)?\\n-Thank you for raising this. We show the results with different perturbation sets on CIFAR-10 with WideResNet 28-10 as you suggested before to illustrate this effect. It can be observed the trained model is comparatively less robust on the attack, which was not used in training. We believe that the comparable performance on $\\\\ell_2$ might be due to the similar attack strength across different attacks, and would have a different outcome with a higher epsilon.\\n\\n| Model | $\\\\ell_\\\\infty$ | $\\\\ell_1$ | $\\\\ell_2$ |\\n|------------------------ |:-------------: |:-----------: |:-----------: |\\n| $\\\\ell_\\\\infty + \\\\ell_1$ | 42.0 +- 0.2 | 56.3 +-0.7 | 70.3 +- 0.1 |\\n| $\\\\ell_\\\\infty + \\\\ell_2$ | 45.5 +- 0.6 | 31.8 +- 0.8 | 72.3 +- 0.2 |\\n| $\\\\ell_1 + \\\\ell_2$ | 26.7 +- 0.4 | 57.3 +- 0.7 | 72.6 +- 0.1 |\\n---\\n5. I realize from the caption of Table 1 that l-inf uses eps=0.03, which is much smaller than the usual 8/255 that other work use. I'm really intrigued as to why RST/TRADES perform so poorly. It also seems that l-inf and l-2 perturbations are compatible and only l-1 seems poor when using RST/TRADES. Would it be possible to train a TRADES or RST model that uses l-inf for half of the examples and l-1 for the other half?\\n- First, we want to clarify that 8/255 = 0.031, which is the same as eps=0.03, and we use the same epsilon for adversarial training and evaluation. This can also be verified from the `attack_pgd` function in our attached supplementary code. Second, it is not always true that only $\\\\ell_1$ seems poor when using RST/TRADES (please see the results on SVHN and Tiny-ImageNet in Table 1). 
We are currently training the TRADES model that uses l-inf for half of the examples and l-1 for the other half, and we will try our best to add the results before the rebuttal deadline.\\n\\nWe hope that we have clarified your concerns. Please let us know if you have any more concerns or would like us to elaborate on any of the above points. We are happy to provide more clarifications if there is anything still unclear.\\n\\n[1] Wong et al., Wasserstein Adversarial Examples via Projected Sinkhorn Iterations, ICML 19\\n\\n[2] Maini et al., Adversarial Robustness Against the Union of Multiple Perturbation Models, ICML 20\"}",
"{\"title\": \"Response to R4 (2)\", \"comment\": \"We sincerely thank you for responsiveness and timely response to our rebuttal.\\n\\n1. I'm assuming that this could be because the gradients are not propagated through the T steps, but rather just the last one. Have the authors tried back-propagating through more steps?\\n- We would like to clarify that the gradients are chained through the T steps since $\\\\theta^{(T+1)}$ is dependent on $\\\\theta^{(T)}$ that depends on $\\\\theta^{(0)}$, and we use TorchMeta [1] for the double backpropagation.\\n---\\n2. I'm not quite sure if my understanding of RST (from Carmon et al.) matches the authors. I believe they used 200 CIFAR-10 equivalent epochs. Hence the models sees 50,000 * 200 images throughout training (the training batch is split 50% between supervised and unsupervised datapoints). I understand however that running 200 epochs can be prohibitively expensive, but - for fairness - I would also be interested in seeing Adv_inf, TRADES or RST trained for ~10 hours.\\nIn the table, it is stated that RST took 50+ hours. Hence it does seem like the authors ran it for about 300 CIFAR-10 equivalent epochs. Yet, robust accuracy is only 55% (instead of the 59% obtained in the original publication).\\n\\n- First, we want to clarify the notation that RST (Carmon et al.) sees (50k + 500k) * 200 images during training where 50k images are the standard supervised points and 500k unsupervised Tiny Images. Each epoch of RST takes around 3 hours of computation, so we don\\u2019t have the compute available to run it for 200 epochs, and 10 hours of RST is not sufficient for convergence. We are currently training TRADES for 3x training time and will update our response before the rebuttal deadline.\\n- Second, please note that RST does not generalize to multiple perturbations and thus is not a competitor but instead can be combined with our method. We trained MNG-AC + RST after our submission we provide the results for CIFAR-10 in our response below. We can observe that with the same set of hyper-parameters and training steps (30 epochs), MNG-AC + RST significantly improves the performance on both $\\\\ell_1$ and $\\\\ell_2$ attack. We will add these results in our final revision, and we believe that this should further strengthen our paper as it is not feasible to combine RST with the current multi-perturbation training methods due to the significant training cost.\\n\\n| Model \\t | $\\\\ell_\\\\infty$ \\t| $\\\\ell_1$ \\t| $\\\\ell_2$ \\t|\\n|--------------\\t |:-------------:\\t|:-----------:\\t|:-----------:\\t|\\n| RST \\t | 54.9 +- 1.8 \\t| 22.0 +- 0.5 \\t| 73.6 +- 0.1 \\t|\\n| MNG-AC \\t| 42.2 +- 0.9 \\t| 55.0 +- 1.2 \\t| 71.5 +- 0.1 \\t|\\n| MNG-AC + RST \\t| 46.2 +- 1.2 \\t| 62.6 +- 1.4 \\t| 80.9 +- 0.1 \\t|\\n---\\n3. I still don't see how this is the case, the l-1 and l-2 row seem un-centered, while l-inf is centered.\\n- We are incredibly sorry for this confusion. This is due to the discrepancy in the labels of the graphs, and we will fix this in the update of our paper.\\n---\\n4. Please, do consider trying it as it is much less computationally expensive that one might expect.\\n- Please note that the exact meta-learning algorithm itself is not our main contribution; we can also use a bilevel-optimization similar to the MAML framework [2]. However, due to the high computation cost of bilevel-optimization, we adopted an online approximation [3, 4] for computational efficiency. 
To incorporate your suggestions, we will do our best to implement hyper-gradients, but we are not sure whether it will be possible to include it by the rebuttal deadline.\\n\\n[1] Deleu et al., 2019 Torchmeta: A Meta-Learning library for PyTorch (https://arxiv.org/pdf/1909.06576.pdf)\\n\\n[2] Finn et al., 2017, Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks, ICML 2017\\n\\n[3] Jang et al., 2019 Learning What and Where to Transfer, ICML 2019\\n\\n[4] Shu et al., 2019 Meta-Weight-Net: Learning an Explicit Mapping For Sample Weighting, NeurIPS 2019\"}",
"{\"title\": \"Direct answers\", \"comment\": \"Thank you for the additional results and updates to the manuscript. Here are some direct answers related to the authors' response.\\n\\n> We empirically found that larger values of T do not provide a significant increase in the robustness while leading to a significant increase in the training cost.\\n\\nI'm assuming that this could be because the gradients are not propagated through the T steps, but rather just the last one. Have the authors tried back-propagating through more steps?\\n\\n> It is essential to note that RST uses ~5 million data points for CIFAR-10 [...] it takes more than 24 days to finish RST with 200 epochs.\\n\\nI'm not quite sure if my understanding of RST (from Carmon et al.) matches the authors. I believe they used 200 CIFAR-10 equivalent epochs. Hence the models sees 50,000 * 200 images throughout training (the training batch is split 50% between supervised and unsupervised datapoints). I understand however that running 200 epochs can be prohibitively expensive, but - for fairness - I would also be interested in seeing Adv_inf, TRADES or RST trained for ~10 hours.\\n\\nIn the table, it is stated that RST took 50+ hours. Hence it does seem like the authors ran it for about 300 CIFAR-10 equivalent epochs. Yet, robust accuracy is only 55% (instead of the 59% obtained in the original publication).\\n\\n> The axes are centred for all the $\\\\ell_p$ norms.\\n\\nI still don't see how this is the case, the l-1 and l-2 row seem un-centered, while l-inf is centered.\\n\\n> due to the high computational cost of hypergradients\\n\\nPlease, do consider trying it as it is much less computationally expensive that one might expect.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your answers. Here are a few more points related to the new manuscript:\\n\\n1. Eq (1) in the revised manuscript is not the typical PGD procedure used by Madry et al. In particular, it is unclear how the argmax is solved here. It would also be good to define $\\\\mathcal{B}$ directly here too.\\n\\n2. The SAT explanation remains unclear. My suggestion would be to introduce the symbol $\\\\mathcal{A}_\\\\mathcal{B}$ that performs PGD using $\\\\mathcal{B}$. The authors can then use $\\\\mathcal{B}_1$, ... The current definitions seems cyclic, e.g. $\\\\mathcal{A}_\\\\mathcal{B}$ is used in line 5 of Alg. 1 and $\\\\mathcal{B}_\\\\mathcal{A}$ is used elsewhere. It seems to be simpler to sample the norm-balls rather than the attacks notation-wise.\\n\\n3. It is not clear that gradients are stopped from $\\\\theta_{T - 1}$. Is that correct?\\n\\n4. Would it make sense to sample the different attacks with different rate (rather than uniformly)?\\n\\n5. Tiny-ImaegeNet -> Tiny-ImageNet\\n\\n6. I realize from the caption of Table 1 that l-inf uses eps=0.03, which is much smaller than the usual 8/255 that other work use. I'm really intrigued as to why RST/TRADES perform so poorly. It also seems that l-inf and l-2 perturbations are compatible and only l-1 seems poor when using RST/TRADES. Would it be possible to train a TRADES or RST model that uses l-inf for half of the examples and l-1 for the other half?\"}",
"{\"title\": \"Response to R2 (3)\", \"comment\": \"We sincerely thank you for responsiveness and timely response to our rebuttal.\\n> I understand now that the gradient with respect to \\\\phi of equation 9 is propagated throughout all the previous steps to x_aug. I would recommend that the authors either make this explicit or update their notation, as the current set of equations as written do not show any dependence on \\\\phi beyond the implicit dependence through x_aug and so this is ambiguous. There is a \\\\phi in the classifier loss of Equation 11, but this is not ideal and somewhat confusing as it's really a direct dependency of x_aug.\\n- We thank you for your suggestions. We have now **explicitly specified the flow of gradients**in the revision (see the paragraph below Eq. (10) in the revision). Further, to explicitly show the dependence of $\\\\phi$ on $x^{aug}$, we have updated our notation from $x^{aug}$ to $x^{aug}\\\\_{\\\\phi}$. \\n- Additionally, we have also updated the notation of $x^{adv}$ to $x^{adv}_{\\\\theta}$ to explicitly show that adversarial examples $x^{adv}$ are generated using the PGD attack on the classifier with **parameters $\\\\theta$**and are independent of the generator.\\n---\\n> Due to this ambiguity, it is also unclear what terms are being backpropagated through in each equation. Since \\\\phi is being backpropagated all the way back to x_aug, what about the other steps for \\\\theta? Are the update steps for \\\\theta also chaining gradients all the way back to \\\\theta_0? Are you using a double backpropagation library here to do this? If these are just single step gradients taken locally with respect to the current iteration of the parameter, then the notation here is inconsistent with \\\\phi and needs to be more carefully presented. \\n- We sincerely apologize for the confusion. As you rightly pointed out, the update steps for $\\\\theta$ **chain back to $\\\\theta^{(0)}$**, since $\\\\theta^{(T+1)}$ is dependent on $\\\\theta^{(T)}$ that depends on $\\\\theta^{(0)}$, and we use TorchMeta [1] for the double backpropagation.\\n\\nWe hope that we have clarified your concerns. Please let us know if you have any more concerns or would like us to elaborate on any of the above points. We are happy to provide more clarifications if there is anything still unclear.\\n\\n[1] Deleu et al., 2019 Torchmeta: A Meta-Learning library for PyTorch (https://arxiv.org/pdf/1909.06576.pdf)\"}",
"{\"title\": \"Thanks for the answers; but how far back do gradients go for all steps?\", \"comment\": \"I thank the authors for clarifying my questions. I now have a better understanding of the proposed approach, though this brings up a few additional questions which are not quite clear yet.\\n\\nI understand now that the gradient with respect to \\\\phi of equation 9 is propagated throughout all the previous steps to x_aug. I would recommend that the authors either make this explicit or update their notation, as the current set of equations as written do not show any dependence on \\\\phi beyond the implicit dependence through x_aug and so this is ambiguous. There is a \\\\phi in the classifier loss of Equation 11, but this is not ideal and somewhat confusing as it's really a direct dependency of x_aug. \\n\\nDue to this ambiguity, it is also unclear what terms are being backpropagated through in each equation. Since \\\\phi is being backpropagated all the way back to x_aug, what about the other steps for \\\\theta? Are the update steps for \\\\theta also chaining gradients all the way back to \\\\theta_0? Are you using a double backpropagation library here to do this? If these are just single step gradients taken locally with respect to the current iteration of the parameter, then the notation here is inconsistent with \\\\phi and needs to be more carefully presented.\"}",
"{\"title\": \"Author Response\", \"comment\": \"Dear Reviewers,\\n\\nWe sincerely appreciate your time and effort to review our paper. Since the first phase of the response period has ended, if you have time could you please indicate if there are any other concerns of yours which we have not addressed, we would be pleased to clarify those points and further strengthen our paper. \\n\\nThank you very much\"}",
"{\"title\": \"Response to R1 (2)\", \"comment\": \"Thank you so much for your quick and timely response to our rebuttal.\\n\\n1. In the response, you wrote \\\"MNG and SAT do not optimize over the worst-case scenarios.\\\" I believe you mean \\\"MNG does not optimize over the worst-case scenarios\\\". My understanding is that SAT is nothing but a stochastic version of full-batch worst-case optimization.\\n- We sincerely apologize for the confusion. We want to clarify that \\\"worst-case optimization\\\" in multi-perturbation training implies **optimizing over the strongest perturbation**from the perturbation set, that is, optimizing over the attack that leads to the maximum loss. However, it requires the computation of all the attacks in the perturbation set, which significantly increases the computational cost. In contrast, as you pointed out, the SAT is a stochastic version of the full-batch worst-case optimization, which **prevents overfitting**on a single perturbation set by stochastically sampling an attack from a given perturbation set and achieves comparable performance to multi-perturbation training with **significant lower computation cost.**\\n- Further, MNG meta-learns the noise distribution to **minimize the adversarial loss on the sampled attack**and not over the worst-case scenarios, as worst-case optimization does not necessarily lead to an optimal solution (see Adv$_{max}$ results on SVHN dataset in Table 1).\"}",
"{\"title\": \"My review concerns are well addressed\", \"comment\": \"I thank the authors for providing the clarification and for revising the submission to address my review comments. I have no additional questions.\", \"one_minor_follow_up_discussion\": \"In the response, you wrote \\\"MNG and SAT do not optimize over the worst-case scenarios.\\\" I believe you mean \\\"MNG does not optimize over the worst-case scenarios\\\". My understanding is that SAT is nothing but a stochastic version of full-batch worst-case optimization.\"}",
"{\"title\": \"Summary of updates in the initial revision\", \"comment\": \"We thank all reviewers for their time and efforts in reviewing our paper. We also thank the reviewers for their insightful comments and constructive suggestions, which helped us to further strengthen our paper. We appreciate the positive comments from the reviewers. R1 and R3 find the work novel, R1 mentions that our work is convincing and important, and R3, R4 highlight that the paper is well-written. Moreover, all reviewers appreciate that we provide a comprehensive set of experiments. Here we briefly mention what has been updated in the revision. **We have highlighted the updates in blue (Please see the revised version of the paper).** For more detailed explanations, please refer to the response to each reviewer.\\n\\n1. **Explicit definition of AC loss:** In Section 4, based on R1's comment, we have explicitly defined the Adversarial Consistency (AC) loss in Eq. (7) of the revision.\\n\\n2. **Clarification on the set of attacks and sampling distribution:** We incorporated the suggestions by R2, and explicitly defined our sampling procedure in Eq. (5) and fixed the notations for the set of attacks. Further, we elaborated Algorithm 1 and clarified the notation for the norm-ball of the attack procedure (in Algorithm 1 and SAT).\\n\\n3. **Related work on generative models for adversarial robustness:** In Section 2, based on R4's comment, we included discussions on various works utilizing generative models for adversarial robustness.\\n\\n4. **Intuition for the work:** We have elaborated the intuition of our learning scheme in a separate paragraph in Section 4 in the revision.\\n\\n5. **Minor fixes:** We have updated the caption of Table 1, and the typo pointed out by R4.\"}",
"{\"title\": \"Response to R2 (2)\", \"comment\": \"We thank you for your clarifying questions and suggestions.\\n\\nRe 1) I re-read the revision and it still wasn't clear in the text. I would recommend adding these details to the Algorithm box as the existing references to Equation 4 in line 3 is ambiguous. Not only does this make the norm ball unclear, but also Eq. 4 is actually the full robust optimization problem and not only the adversarial examples generation problem. \\n\\n- We sincerely apologize for the confusion. We have highlighted all our changes in blue, in the revision. According to your suggestions, **we have updated Algorithm 1**. In particular, we have **explicitly defined the attack generation**in Eq. (1) and we refer to this equation for the generation of adversarial examples in Algorithm 1 using the sampled attack. We have also updated the notation of norm-ball $\\\\mathcal{B}(x,\\\\varepsilon)$ to $\\\\mathcal{B}_{\\\\mathcal{A}}(x,\\\\varepsilon)$ to represent the norm ball for a specific attack type $\\\\mathcal{A}$. \\n---\\n\\nRe 2) I understand that the distribution is picking from Lp bounded perturbations, which tells me that the support of the distribution is the set of Lp balls. However, I still have no idea what the actual distribution is from the response, and still could not find this information in the revision. \\n\\n- We apologize for the confusion. We have updated the notations and **explicitly defined a perturbation set**$S$ and the **attack sampling procedure**in Eq. (5) of the revision. We have further clarified and stated this in the Algorithm 1 of the revision as well.\\n\\n---\\n\\nRe 3.1) I understand at this point that the intention of the authors in the response was to convey that the adversarial examples and the generator do not depend on each other. \\n\\n- We apologize for the misunderstanding. We want to clarify that **our intention was not to convey that**adversarial examples and the generator does not depend on each other. The adversarial examples do not depend on the generator, as the adversarial examples are simply generated using the PGD attack (see Eq. (1) in the revision), but the generator does depend on the adversarial examples as it generates a sample to minimize adversarial classification loss across multiple perturbations (see Eq. (10) in the revision), via meta-learning.\\n\\n- Please note that while the meta-generator generates a sample that minimizes the adversarial classification loss across multiple perturbations, it is **not necessarily an adversarial example**. The generated sample simply needs to be effective in minimizing the adversarial classification loss and enforcing label consistency across samples from multiple attacks, and clean samples. Consequently, it pushes the decision boundary (see Figure 4) and enforces a smooth and robust network across multiple perturbations.\\n\\n--- \\n\\nRe 3.2) Equation (9) is minimizing the generator parameters with respect to the classification loss evaluated on the adversarial examples, which seems to be in direct contradiction to the claim \\\"we do not train the generator to minimize the classifier loss\\\". 
\\n\\n- Please note that the classifier loss or the **classification loss denotes the loss on clean examples $(\\mathcal{L}_{cls}(\\theta \\mid x^{clean},y))$** and the adversarial classifier loss or the **adversarial classification loss denotes the loss on adversarial examples generated by our sampling procedure $(\\mathcal{L}_{cls}(\\theta \\mid x^{adv},y))$.**\\n\\n- Thus, our claim that \\\"we do not train the generator to minimize the classifier loss\\\" implies that **we do not train the generator to minimize the loss on clean examples.** Instead, the generator **explicitly learns an optimal noise distribution to minimize the loss across multiple adversarial perturbations** (as denoted by Eq. (9) in the previous revision or Eq. (10) in the current revision, and our previous comment). \\n\\n---\\n\\nRe 3.3) Since the adversarial examples do not depend on the generator, I don't see how there can be a gradient with respect to the generator parameters \\phi here, since the classification loss of the adversarial examples has nothing to do with the generator.\\n\\n- As mentioned in Re 3.1), the adversarial examples do not depend on the generator as the adversarial examples are generated using the PGD attack. Still, the generator depends on the adversarial examples sampled from the perturbation set $S$. Consequently, $\\phi$ in Eq. (10) is dependent on $\\theta^{(T+1)}$, which depends on $\\theta^{(T)}$ (see Eq. (9)), which in turn depends on $x^{\\rm{aug}}$ (see Eq. (8)); this acts as the path for the flow of gradients.\\n\\n---\\n\\nRe 3.4) Since this is such a critical misunderstanding, some further revision here is likely needed (i.e. perhaps this has something to do with the total loss, which is curiously not used at all in any of the steps of the algorithm). \\n\\n- Step 6 of Algorithm 1 in the revision (Step 5 in the previous version of the paper) updates $\\theta$ to minimize the total loss. We have explicitly mentioned this in the main text.\"}",
"{\"title\": \"Suggestions and unanswered questions\", \"comment\": \"Thanks for the response! Based on the response, I have a couple suggestions and some outstanding questions that weren't answered in the response, the latter of which I bring up again below.\\n\\nRe 1) Thank you for clarifying this. I re-read the revision and it still wasn't clear in the text (if there was a part which made this explicit please point it to me as I may have missed it). Since there is extra space, I would recommend adding these details to the Algorithm box as the existing references to Equation 4 in line 3 is ambiguous. Not only does this make the norm ball unclear, but also Eq. 4 is actually the full robust optimization problem and not only the adversarial examples generation problem. \\n\\nRe 2) I understand that the distribution is picking from Lp bounded perturbations, which tells me that the support of the distribution is the set of Lp balls. However, I still have no idea what the actual distribution is from the response, and still could not find this information in the revision. \\n\\nRe 3+5) There seems to have been a mistake here in the equation I was referring to. I am referring to the update step for the generator, which is now equation (9) in the revision. I understand at this point that the intention of the authors in the response was to convey that the adversarial examples and the generator do not depend on each other. But Equation (9) is minimizing the *generator* parameters with respect to the classification loss evaluated on the *adversarial examples*, which seems to be in direct contradiction to the claim \\\"we do not train the generator to minimize the classifier loss\\\". Since the adversarial examples do not depend on the generator, I don't see how there can be a gradient with respect to the generator parameters \\\\phi here, since the classification loss of the adversarial examples has nothing to do with the generator. \\n\\nSince this is such a critical misunderstanding, some further revision here is likely needed (i.e. perhaps this has something to do with the total loss, which is curiously not used at all in any of the steps of the algorithm).\"}",
"{\"title\": \"Response to R4\", \"comment\": \"We sincerely appreciate your constructive comments. We respond to your main concerns below:\\n\\n1. I'd recommend making a review of generative models to build adversarial examples, even if they are orthogonal to the one proposed in this paper (e.g., [1,2,3])\\n\\n- Thank you for the helpful suggestion. We have provided a detailed review of generative models for adversarial robustness in the revision.\\n---\\n2. In Eq. (6), what is $\\\\mathcal{B}(x, \\\\epsilon)$. Since there are multiple threat models, I am assuming that it is selected at random between l_1, l_2 and l_inf (like SAT).\\n\\n- As you rightly mentioned, $\\\\mathcal{B}(x, \\\\varepsilon)$ refers to a **random norm-ball (like SAT)**. That is, if the sampled attack is an $\\\\ell_2$ attack, then $\\\\mathcal{B}$ denotes the $\\\\ell_2$ norm ball, and if the sampled attack is an $\\\\ell_1$ attack, then $\\\\mathcal{B}$ is the $\\\\ell_1$ norm ball. Still, it is important to note that $\\\\mathcal{B}(x, \\\\varepsilon)$ is the norm-ball of the attack sampled by Stochastic Adversarial Training (SAT), as MNG learns the noise to minimize the adversarial loss, it is essential to project the generated noise on the same norm-ball. We have clarified this in the revision. \\n---\\n3. The number of inner steps T seems to be critical. However, I don't see any study on this in the paper. It is not clear which value was used for the experiments.\\n\\n- We used $T=2$ for all our experiments to keep the training cost minimum. We empirically found that larger values of T do not provide a significant increase in the robustness while leading to a significant increase in the training cost. We provide a comparison with different values of $T$ below:\\n| Model \\t| $\\\\ell_\\\\infty$ \\t| $\\\\ell_1$ \\t| $\\\\ell_2$ \\t| Time (h) \\t|\\n|---------\\t|---------------\\t|-----------\\t|-----------\\t|----------\\t|\\n| $T = 1$ \\t| 41.5+-0.8 \\t| 55.1+-0.9 \\t| 71.8+-0.2 \\t| 9.4 \\t|\\n| $T = 2$ \\t| 42.2+-0.9 \\t| 55.0+-1.2 \\t| 71.5+-0.1 \\t| 11.2 \\t|\\n| $T = 4$ \\t| 42.4+-0.8 \\t| 55.6+-1.1 \\t| 71.0+-0.2 \\t| 14.6 \\t|\\n| $T = 8$ \\t| 42.6+-1.0 \\t| 55.3+-1.2 \\t| 71.0+-0.1 \\t| 18.9 \\t|\\n\\n---\\n4. The experiments are run using 30 epochs which is rather on slim side. E.g., RST_inf should reach about 59% robust accuracy with 200 epochs of training (with 30 epochs it only reaches 55%). I'm curious as to whether the comparison with the proposed approach is unfair (e.g., Adv_inf sees a single adv example per batch, whereas MNG-AC sees 2).\\n\\n- It is essential to note that RST uses ~5 million data points for CIFAR-10 and SVHN, and it took us 4 days with four GeForce RTX 2080Ti to train with 30 epochs. Since it takes **more than 24 days**to finish RST with 200 epochs, we did not evaluate it. Furthermore, we would like to clarify that MNG-AC does not see 2 examples per batch, the lookahead in Equation 8. occurs with a meta-model, and the classifier update occurs only once. (Please see Line 393 in train_MNG.py in our code for more details).\\n---\\n5. It's not entirely clear to me why beta negatively affects l_2 robustness. In general, it would be interesting to see what MNG-AC does if different subsets of threats are used.\\n\\n- We would like to clarify that it is not a general statement that beta negatively affects $\\\\ell_2$ robustness, instead $\\\\beta$ controls the trade-off between multiple perturbations. 
We will do our best to get the results for different subsets of threats by the end of the rebuttal deadline.\\n---\\n6. The l_2 loss landscapes seem noisier than they should be. Also it's unclear why the axes are centred for l_inf and not for l_2 (explain how these are generated).\\n\\n- We apologize for the confusion. The axes are centred for all the $\\ell_p$ norms; we will further clean the plots in the revision. To generate these plots, we vary the input along a linear space defined by the $\\ell_p$ norm of the gradient, where the x- and y-axes represent the perturbation added in each direction, and the z-axis represents the loss.\\n---\\n7. In Table 5, MNG-AC achieves 35.1% against all l_inf attacks, but only 33.7% against AutoAttack. Am I missing something?\\n\\n- Thank you for pointing this out. We have fixed this discrepancy in the revision.\\n---\\n8. Concerning Eq. (7), as a curiosity, have authors considered implicit differentiation [4]?\\n\\n- Thank you for suggesting relevant work. However, due to the **high computational cost of hypergradients,** we did not evaluate this direction of work. We will add a reference to this work in the final draft, and a comparison with it should be an interesting problem for future work.\\n---\\n9. The caption could be expanded to include epsilon values. B) Visuaization -> Visualization\\n- Thank you for the suggestions; we have updated the caption and fixed the typo in the revision.\\n\\nAdditionally, to promote reproducibility of our work, we provide the pre-trained model weights here: https://drive.google.com/file/d/1kVfOZ2CrhSzgzlS6gK4AntNZhIUosvfz/view?usp=sharing\"}",
"{\"title\": \"Response to R3\", \"comment\": \"We sincerely appreciate your constructive comments. We respond to your main concerns below:\\n\\n1. I was missing an intuitive description of why the adversarial noise should improve robustness to adversarial attacks. I was only aware of it as a method to improve corruption robustness.\\n\\n- It is important to note that simply adding noise **would not improve the robustness to adversarial attacks** (see our comparison with A(dversarial)NG below), MNG **explicitly learns an optimal noise distribution**to prevent overfitting and to promote the generalization across multiple perturbations. Additionally, adversarial noise acts as a noise regularization technique, which is a common technique to improve the generalization in deep neural networks. Further, we have added a separate paragraph to highlight the illustration of our training scheme in Section 4 of the revision of our paper.\\n---\\n2. Is there a difference between the M(eta)NG and the A(dversarial)NG from Rusak et al. 2020?\\n\\n- It is important to note that A(dversarial)NG (Rusak et al. 20) learns the noise projected on $\\\\ell_2$ norm-ball to confuse the classifier, in contrast maximally, we **meta-learn the noise distribution to compliment the generalization across multiple $\\\\ell_p$ perturbations.** Further, we show that **A(dversarial)NG fails to defend against multiple adversarial perturbations**below, which demonstrates the efficiency of M(eta)NG over A(dversarial)NG:\\n\\n| Dataset \\t| Model \\t| Acc$_{\\\\rm clean}$ \\t| $\\\\ell_\\\\infty$ \\t| $\\\\ell_1$ \\t| $\\\\ell_2$ \\t|\\n|---------------\\t|--------\\t|:-----------------:\\t|:-------------:\\t|:---------:\\t|:---------:\\t|\\n| CIFAR-10 \\t| MNG-AC \\t| 81.5+-0.3 \\t| 42.2+-0.9 \\t| 55.0+-1.2 \\t| 71.5+-0.1 \\t|\\n| CIFAR-10 \\t| ANG \\t| 94.6+-0.0 \\t| 0.1+-0.00 \\t| 0.1+-0.0 \\t| 2.9+-0.9 \\t|\\n| SVHN \\t| MNG-AC \\t| 93.7+-0.1 \\t| 35.1+-1.9 \\t| 47.4+-2.2 \\t| 77.6+-1.0 \\t|\\n| SVHN \\t| ANG \\t| 96.8+-0.1 \\t| 0.2+-0.0 \\t| 7.3+-0.5 \\t| 33.9+-1.5 \\t|\\n| Tiny-ImageNet \\t| MNG-AC \\t| 53.1+-0.3 \\t| 27.4+-0.7 \\t| 39.6+-0.7 \\t| 44.8+-0.1 \\t|\\n| Tiny-ImageNet \\t| ANG \\t| 62.8+-0.3 \\t| 0.2+-0.1 \\t| 3.4+-0.4 \\t| 13.4+-0.6 \\t|\\n\\n---\\n3. Why the MNG was trained the way it is was a bit unclear for me.\\n- Our objective was to learn optimal noise distribution that could explicitly minimize the loss of multiple adversarial perturbations and promote label consistency across multiple perturbations. A standard approach was to use a bilevel optimization to train the adversarial classifier with MNG. However, bilevel optimization for adversarial training was **computationally costly.** As a result, we adopted an alternative scheme where we first update the model parameters on the augmented samples for $T$ steps, to explicitly increase the influence of the augmented samples. Then we perform a one-step lookahead to model the adaptaion of the adversarial classifier in the presence of augmented examples. Lastly, after receiving the feedback from the classifier, we update $\\\\phi$ to explicitly minimize the adversarial loss to promote the adversarial robustness of the classifier in the next step.\\n\\n4. I don't work with adversarial examples I am not at all confident in that assessment. From the discussions with people who work on adversarial examples new defenses are usually broken very quickly and there is a number of papers which break numerous defenses. 
The method is however based on adversarial training which to my knowledge is the only robust method so far and the used attacks seem valid. So I am definitely leaning towards accept but the opinion of a real expert would be highly appreciated as I feel not at all qualified to assess the validity of papers on adversarial examples.\\n\\n- We understand your concern, and we would like to highlight that we have evaluated our proposed method on all the state-of-the-art attacks that exist in the literature. We believe that our evaluation can be a firm guideline when other researchers pursue the evaluation of defenses that are robust against multiple perturbations in the future. Further, as you rightly mentioned, our defense is based on adversarial training, which is the only robust method that has withstood the stronger set of attacks.\"}",
"{\"title\": \"Response to R2\", \"comment\": \"We sincerely appreciate your constructive comments. We respond to your main concerns below:\\n\\n1. Augmented examples (x_aug) are generated by adding noise from the MNG and projecting it onto some ball B. It is not clear to me what ball this is since the authors are considering multiple perturbations. Is it a random type? Or a joint projection? I assume it is at least one of the perturbations being considered, or is that incorrect? \\n\\n- $\\\\mathcal{B}(x, \\\\varepsilon)$ refers to the **norm-ball of the specific attack**sampled by Stochastic Adversarial Training (SAT). That is, if the sampled attack is an $\\\\ell_2$ attack, then $\\\\mathcal{B}$ denotes the $\\\\ell_2$ norm ball, and if the sampled attack is an $\\\\ell_1$ attack, then $\\\\mathcal{B}$ is the $\\\\ell_1$ norm ball. As MNG learns the noise to minimize the adversarial loss, it is essential to project the generated noise on the same norm-ball. We have clarified this point in the revision.\\n---\\n2. Similarly, in the algorithm, the authors generate adversarial examples (x_adv) by sampling a random attack. I could not find what set of attacks were being sampled from, or what the sampling distribution is (I checked the appendix as well). \\n\\n- We apologize for the confusion. In this work, the sampling distribution corresponds to the **$\\\\ell_p$-bounded perturbations.** Still, it is important to note that unlike the average and max strategy, MNG + SAT can be applied to any distribution of attacks with a constant cost. We have clarified this point in the revision.\\n---\\n3. The generator is apparently updated to minimize the classifier loss on the adversarial examples as written in Equation (8). However, the adversarial examples are generated from some unspecified set of attacks, which implies that the set of attacks actually depends on the generator somehow. Is this supposed to be the classifier loss on the augmented samples? If not, then how do the adversarial examples depend on the generator? \\n\\n- This is a critical misunderstanding. **Adversarial examples do not depend on the generator;** instead, the one-step update in Eq. (8) is essential to do a lookahead for adapting the model parameters in the presence of the noise-augmented samples. Note that augmented samples are different from the adversarial examples (please refer Figure 1) and our contribution is to optimally generate augmented examples to improve the robustness against multiple perturbations explicitly.\\n---\\n4. The consistency loss involves clean, adversarial, and augmented posterior distributions. There are no details on these distributions: are these simply the softmax of the logits? Or is a generative model that outputs a distribution being used? \\n\\n- As you mentioned, the distributions are the softmax of the logits of the clean, adversarial and augmented samples where the augmented samples are the output of the Meta Noise Generator (MNG). We have incorporated this point in the revision.\\n---\\n5. What is the motivation behind training the generator to minimize the classifier loss? Why would we want to do this over random sampling? What's to prevent a degenerate solution of simply learning to produce a zero perturbation (and thus always producing clean examples, which can achieve low loss)? 
\\n\\n- Firstly, we would like to clarify that we **do not train the generator to minimize the classifier loss;** instead, the generator learns an optimal noise distribution in a meta-learning training scheme to **minimize the adversarial classification loss**, where the adversarial classification loss is the loss on the sampled attack from the distribution of attacks.\\n\\n- Secondly, the motivation behind training the generator to minimize this objective is to explicitly learn the noise distribution essential for generalization across multiple perturbations, which might not necessarily correspond to any of the attack perturbations. Furthermore, our algorithm to improve the generalization across multiple perturbations is also motivated by the fact that noise regularization is a common technique for improving the generalization performance of deep neural networks. In contrast, even though random sampling helps in generalization, it leads to a suboptimal solution (see Table 2).\\n\\n- Lastly, our meta-learning training scheme prevents the degenerate solution, as producing clean examples would not result in a lower loss on multiple adversarial perturbations.\\n---\\n6. I have checked the supplementary material, and the authors have included the code for running their experiments. Ideally, this would also include pre-trained model weights. \\n\\n- Thank you for pointing this out. Due to the size limit of the supplementary material, we could not provide the pre-trained models. We provide the pre-trained model weights here: https://drive.google.com/file/d/1kVfOZ2CrhSzgzlS6gK4AntNZhIUosvfz/view?usp=sharing\"}",
"{\"title\": \"Response to R1\", \"comment\": \"We sincerely appreciate your constructive comments. We respond to your main concerns below:\\n\\n1) The adversarial consistency (AC) loss is never defined explicitly. \\n- We apologize for the confusion. We have updated the revision with the explicit definition of the Adversarial Consistency (AC) loss in Equation 6.\\n---\\n2) Although the results show improved multi-attack robustness, it will be great if the authors can add more intuition on why the proposed training method leads to performance improvement. Based on the ablation study, it seems that the role of SAT and MNG is to reduce overfitting in robustness to encourage generalization, rather than optimization over the worst-case scenarios.\\n\\n- As you mentioned, SAT and MNG indeed play a critical role to reduce overfitting in robustness to encourage generalization. Intuitively, MNG acts as a **noise regularization technique,** and SAT promotes generalization across multiple perturbations **due to its stochasticity.** Further, we have added a separate paragraph to highlight the illustration of our training scheme in Section 4 of the revision of our paper.\\n\\n- Additionally, we would like to clarify that unlike the max strategy, MNG and SAT **do not optimize over the worst-case scenarios.** MNG learns an **input-dependent optimal noise distribution to lower adversarial error across all the perturbations**that does not necessarily correspond to any of the attack perturbations.\\n---\\n3) The considered multi-attack setting is still limited to different Lp norm perturbation constraints. Although the authors showed improved robustness over unforeseen attacks, the authors should also discuss how the proposed method can generalize to different attacks beyond Lp norms.\\n\\n- We agree that the evaluation of attacks beyond $\\\\ell_p$ norms is interesting, and we would like to point out that the unforeseen adversaries consist of Elastic attack and JPEG attacks which **do not belong to the standard family of $\\\\ell_p$ attacks.**\"}",
"{\"title\": \"Initial review\", \"review\": \"In this paper, the authors propose a novel meta-learning framework that explicitly learns to generate noise to improve model robustness (against multiple types of attacks). The results indicate that the proposed approach improves on the state-of-the-art.\\n\\nOverall, the paper is well written. However some details are missing and this could make the paper hard to reproduce. The experiments could be expanded.\\n\\n1) There is a significant amount of work about using generative models to build adversarial examples. The literature review only focuses on classical adversarial robustness and robustness against multiple adversaries. I'd recommend making a review of these approaches, even if they are orthogonal to the one proposed in this paper (e.g., [1,2,3])\\n2) In Eq. (6), what is \\\\mathcal{B}(x, \\\\epsilon). Since there is multiple threat models, I am assuming that it is selected at random between l_1, l_2 and l_inf (like SAT).\\n3) The number of inner steps T seems to be critical (as it will trade-off gradient precision with compute). However, I don't see any study on this in the paper. Also, it is not clear which value was used for the experiments.\\n4) Looking at Eq. (7), it seems like backpropagation through the T inner steps is necessary to compute the gradients w.r.t. \\\\phi. This seems overly expensive and I find surprising that adv_avg and adv_max take so much longer to train.\\n5) Concerning Eq. (7), as a curiousity, have authors considered implicit differentiation [4] ?\\n6) The experiments are run using 30 epochs which is rather on slim side. E.g., RST_inf should reach about 59% robust accuracy with 200 epochs of training (with 30 epochs it only reaches 55%). I'm curious as to whether the comparison with the proposed approach is unfair (e.g., Adv_inf sees a single adv example per batch, whereas MNG-AC sees 2).\\n7) It's not entirely clear to me why beta negatively affects l_2 robustness. I'd assume that if the model was only trained against l_2, then there might be an optimal value for beta that is different that the one from Fig. 2. In general, it would be interesting to see on MNG-AC does if different subsets of threats are used.\\n8) The l_2 loss landscapes seem more noisy that what they should be. Also it's unclear why the axes are centered for l_inf and not for l_2 (explain how these are generated).\\n9) In Table 5, MNG-AC achieves 35.1% against all l_inf attacks, but only 33.7% against AutoAttack. Am I missing something?\", \"details\": \"A) It would helpful to the reader to have the epsilon values written on top of the different tables. The captions could be expanded to include more details.\\nB) Visuaization -> Visualization\\n\\n[1] https://openreview.net/pdf?id=SJeQEp4YDH: GAT: Generative Adversarial Training for Adversarial Example Detection and Robust Classification\\n[2] https://arxiv.org/pdf/1801.02610: Generating Adversarial Examples with Adversarial Networks\\n[3] https://arxiv.org/pdf/1710.10766: PixelDefend: Leveraging Generative Models to Understand and Defend against Adversarial Examples\\n[4] https://arxiv.org/pdf/1911.02590: Optimizing Millions of Hyperparameters by Implicit Differentiation\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Promising results, but method is not clear\", \"review\": \"Summary\\n=======\\nThe authors propose a number of techniques to learn models which are adversarially robust to multiple perturbations. These involve a noise generator, a loss to enforce consistency, as well as a stochastic variant of adversarial training. With these changes, they are able to produce improvements to robust accuracy to multiple perturbation types. \\n\\n\\nOverall, I get the idea and the empirical results seem promising. However, the structure and writing of the paper is at times rather confusing, and there are a lot of missing details. If the code were not supplied, it would be difficult in the current state to reproduce the method from the paper. Perhaps due to this, the specifics of the key component, the meta noise generator, are still rather opaque to me. Perhaps the authors can clarify, and I am happy to follow up afterwards. \\n\\nComments for discussion\\n=======================\\nThe majority of my confusion lies in section 4, for the specifics of the meta noise generator and parts of the algorithm in general. I am otherwise well acquainted with the relevant literature. \\n\\n1) Augmented examples (x_aug) are generated by adding noise from the MNG and projecting it onto some ball B. It is not clear to me what ball this is since the authors are considering multiple perturbations. Is it a random type? Or a joint projection? I assume it is at least one of the perturbations being considered, or is that incorrect? \\n\\n2) Similarly, in the algorithm, the authors generate adversarial examples (x_adv) by sampling a random attack. I could not find what set of attacks were being sampled from, or what the sampling distribution is (I checked the appendix as well). \\n\\n3) The generator is apparently updated to minimize the classifier loss on the adversarial examples as written in Equation (8). However, the adversarial examples are generated from some unspecified set of attacks, which implies that the set of attacks actually depends on the generator somehow. Is this supposed to be the classifier loss on the augmented samples? If not, then how do the adversarial examples depend on the generator? \\n\\n4) The consistency loss involves clean, adversarial, and augmented posterior distributions. There are no details on these distributions: are these simply the softmax of the logits? Or is a generative model that outputs a distribution being used? \\n\\n5) On a more fundamental level, what is the motivation behind training the generator to minimize the classifier loss? Why would we want to do this over random sampling? What's to prevent a degenerate solution of simply learning to produce a zero perturbation (and thus always producing clean examples, which can achieve low loss)? \\n\\n\\nMinor comments\\n==============\\nI have checked the supplementary material and the authors have included the code for running their experiments. Ideally, this would also include pre-trained model weights. \\n\\nUpdate\\n======\\nAfter much effort, I can say that I understand the paper. The edits appear to have incorporated all the identified missing information. 
I have thus updated my confidence and slightly improved my score, however I am not confident that the current presentation of the approach will be understandable by a reader without contacting the authors, given that the difficulty I had in understanding the paper (and my initial confidence) stemmed primarily from missing information and poor presentation for the approach. Although the results do seem to improve upon past work, its impact will suffer if it is difficult to understand for a non-reviewer reader. I would be more confident if a fresh set of eyes could understand the details of the work without having to go to the authors to clarify so many details.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Thorough Multi-attack Robustness Evaluation and Clever Adversarial Training\", \"review\": \"This paper addresses a timely issue in adversarial robustness - efficient training of robust models against multiple adversarial perturbations. The authors propose a combination of three techniques: stochastic adversarial training (SAT), meta noise generator (MNG), and adversarial consistency (AC) loss for efficient training, and evaluate the robustness using multiple L1, L2, and Linf norm-bounded attacks and three datasets (CIFAR-10, SVHN, and Tiny Imagenet). The results show improved multi-attack robustness over several baselines (including single-attack and multiple-attack models) and reduced training time. Ablation studies are also performed to illustrate the utility of each component of the proposed model. Overall, this paper provides very detailed evaluations involving multiple datasets, attacks, baselines, and robustness metrics. I find the results convincing and important, and also find sufficient novelty in the proposed training method.\\n\\nThe strengths (S) and weaknesses (W) of this submission are summarized below.\\n\\nS1. The proposal of MNG and AC is effective and novel.\\nS2. The evaluation is thorough and convincing.\\nS3. The proposal improves both robustness and training efficiency in most cases.\\n\\nW1. The adversarial consistency (AC) loss is never defined explicitly. Based on equation (5), it is hard to understand how AC \\\"represents the Jensen-Shannon Divergence (JSD) among the posterior distributions\\\" when considering three distributions, P_clean, P_adv, and P_aug. More clarification is needed.\\n\\nW2. Although the results show improved multi-attack robustness, it will be great if the authors can add more intuition on why the proposed training method leads to performance improvement. Based on the ablation study, it seems that the role of SAT and MNG is to reduce overfitting in robustness to encourage generalization, rather than optimization over the worst-case scenarios.\\n\\nW3. The considered multi-attack setting is still limited to different Lp norm perturbation constraints. Although the authors showed improved robustness over unforeseen attacks, the authors should also discuss how the proposed method can generalize to different attacks beyond Lp norms.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review\", \"review\": \"1. Summary\\n\\nThe authors propose a new method to improve robustness to adversarial examples under various norms (L1, L2 and LInf). Their method combines adversarial training with an adversarial noise generator. They improve upon adversarial training in a multi norm setting by choosing one norm at random for each sample, instead of computing an adversarial for all norms, thus significantly reducing the training time. They additionally improve robustness by regularizing model features between the standard image, the adversarially perturbed image and a perturbation of the image created with an adversarial noise generator.\\n\\n\\n2. Strengths\\n+ The method is based on adversarial training. As far as I know and as the authors note this is the only method that reliably leads to more robust models.\\n+ The authors attack their models with a range of attacks that to the best of my knowledge are state-of-the art.\\n+ The method apparently works in the multi norm setting.\\n\\n3. Weaknesses \\n- I was missing an intuitive description why the adversarial noise should improve robustness to adversarial attacks. I was only aware of it as a method to improve corruption robustness.\\n- I was not always sure if I got everything correctly in sections 4 and 5.3. I think I got it but I sometimes missed a figure. It may e.g. be helpful to include the losses in Figure 1 or make a separate figure. Especially why the MNG was trained the way it is was a bit unclear for me.\\n\\n\\n4. Recommendation\\n\\nI think this paper is an accept but as I don't work with adversarial examples I am not at all confident in that assessment. From the discussions with people who work on adversarial examples new defenses are usually broken very quickly and there is a number of papers which break numerous defenses. The method is however based on adversarial training which to my knowledge is the only robust method so far and the used attacks seem valid. So I am definitely leaning towards accept but the opinion of a real expert would be highly appreciated as I feel not at all qualified to assess the validity of papers on adversarial examples.\\n\\n\\n5. Questions/Recommendations\\n- Is there a difference between the M(eta)NG and the A(dversarial)NG from Rusak et. al. 2020?\\n\\n\\n6. Additional feedback \\n- None as the paper is pretty well written.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}"
]
} |
SQfqNwVoWu | Approximate Probabilistic Inference with Composed Flows | [
"Jay Whang",
"Erik Lindgren",
"Alex Dimakis"
] | We study the problem of probabilistic inference on the joint distribution defined by a normalizing flow model. Given a pre-trained flow model $p(\boldsymbol{x})$, we wish to estimate $p(\boldsymbol{x}_2 \mid \boldsymbol{x}_1)$ for some arbitrary partitioning of the variables $\boldsymbol{x} = (\boldsymbol{x}_1, \boldsymbol{x}_2)$. We first show that this task is computationally hard for a large class of flow models. Motivated by this hardness result, we propose a framework for $\textit{approximate}$ probabilistic inference. Specifically, our method trains a new generative model with the property that its composition with the given model approximates the target conditional distribution. By parametrizing this new distribution as another flow model, we can efficiently train it using variational inference and also handle conditioning under arbitrary differentiable transformations. Since the resulting approximate posterior remains a flow, it offers exact likelihood evaluation, inversion, and efficient sampling. We provide extensive empirical evidence showcasing the flexibility of our method on a variety of inference tasks with applications to inverse problems. We also experimentally demonstrate that our approach is comparable to simple MCMC baselines in terms of sample quality. Further, we explain the failure of naively applying variational inference and show that our method does not suffer from the same issue. | [
"normalizing flow",
"probabilistic inference",
"variational inference",
"inverse problem"
] | Reject | https://openreview.net/pdf?id=SQfqNwVoWu | https://openreview.net/forum?id=SQfqNwVoWu | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"n2AhQYbKKO",
"8kECD84zaR0",
"C8NVrtaXvIf",
"sx1Vto5heiA",
"w0E_ve-AZ3_",
"q_uR5ECjeK8",
"ZOn9xHK3tP",
"uOeclzk31wf",
"Ghn-Bje5Np",
"3-gIYyX1gvd",
"TZi-LXY6X1X",
"sUFP8ex1yOW",
"Ps8inpQVSYq",
"OUOrPyW2k3j",
"STKnXsIl2l4"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040361680,
1606297707296,
1606133471666,
1605788565497,
1605787931903,
1605787275958,
1605786843466,
1605786540008,
1605785725670,
1605690871064,
1605184267251,
1603874849436,
1603804433959,
1603727071740,
1603716251645
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3705/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a method for conditional inference with arbitrary conditioning by creating composed flows. The paper provides a hardness result for arbitrary conditional queries. Motivated by the fact that conditional inference is hard the paper therefore suggests a novel relaxation where the *conditioning* is relaxed.\\n\\nThere were various concerns from the reviewers regarding notation, comparison algorithms, and how the hardness result motivates the smoothing operation introduced. After careful study of the paper and all the comments I find that I am most concerned about the hardness result and how it motivates the smoothing operation that is done. Novel computational complexity results *as such* are not really in the scope of ICLR. There's nothing wrong with having such a result in a paper, of course, but a paper like this should be evaluated on the basis of the algorithm proposed.\\n\\nLike R4, I do not follow how this hardness result is meant to motivate the smoothing that's applied. The paper is unambiguous that the goal is to do conditional inference. A hardness result is presented for conditional inference, and so a relaxed surrogate is presented. This has a minor problem that it's not clear the relaxed problem avoids the complexity boundary of the original one. There's a larger problem, though. The hardness result has not been sidestepped! The goal is still to solve conditional inference. The algorithm that's presented is still an approximate algorithm for conditional inference. R4 suggests that other approximation algorithms should be compared to. The authors responded to this point, but I am not able to understand the response. For the same reason, I think it is valid to ask for comparison to other approximate inference algorithms (e.g. without smoothing)\\n\\nNone of the above is to say that the smoothing approach is bad. It may very well be. However, I think that either the existing argument should be clarified or a different argument should be given.\\n\\nFinally here are two minor points (These weren't raised by reviewers and aren't significant for acceptance of the paper. I'm just bringing them up in case they are useful.)\\n\\nIs Eq. 3 (proof in Appendix B.1) not just an example of the invariance of the KL-divergence under diffeomorphisms?\\n\\nProof in appendix B.2 appears to just a special case of the standard chain rule of KL-divergence (e.g. as covered in Cover and Thomas)\"}",
"{\"title\": \"Response to Additional Questions from R1\", \"comment\": \"We appreciate the detailed response from R1. We have made several updates to the manuscript based on the feedback. Our response to the additional questions are as follows.\\n\\n**1. Clarifying the main contribution of the paper** \\n\\nWe acknowledge your point and have updated the abstract accordingly -- please take a look. To reiterate, the main contribution of our work is the use of a flow-based pre-generator that works in conjunction with a flow base model, which enables straightforward likelihood-based training via VI. This results in an approximate posterior that is itself a flow model and offers efficient sampling, inversion, and likelihood evaluation, which existing approaches do not provide. \\n\\n**2. Meaning of \\\"computational flexibility\\\"** \\n\\nBy \\\"computational flexibility\\\", we are referring to the operations that can be done efficiently using a flow model, but not with other generative models. These are mentioned in the introduction:\\n\\n> \\\"Among them, normalizing flow models ... stand out due to their computational flexibility, as they offer efficient sampling, likelihood evaluation, and inversion.\\\"\\n\\nFor downstream tasks such as lossless compression or uncertainty quantification, having these properties is critical. We believe that this is a key benefit of our approach. For an example comparison, the actor network in [1] provides fast i.i.d. sampling, but inversion and likelihood evaluation are difficult. \\n\\n**3. Additional benefits of VI not covered by the authors** \\n\\nWe appreciate the constructive suggestion. We had mentioned the i.i.d. guarantee in our earlier response (see answer 3), but it was not added to the manuscript -- it has been added now to the list of contributions at the end of Section 1. The possibility of amortization is mentioned in the conclusion.\\n\\n\\n**4. MSE calculation** \\n\\nThanks for catching this -- the MSE calculation was missing averaging over the pixels and has now been fixed. We clarify that the error is calculated in the pixel space.\"}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"> While [2-4] share the general principle of leveraging the favorable geometry of the latent space, their main focus is on improving the mixing of Markov chains and does not benefit from the particularities of our setup, e.g. the invertibility of the base model.\", \"quoted_from_the_abstract\": \"\\\"\\\"\\\" We experimentally demonstrate that our approach outperforms Langevin Dynamics in terms of sample quality, while requiring much fewer parameters and training time compared to regular variational inference. \\\"\\\"\\\"\\n\\n[2-4] also use an invertible map to improve the geometry of the energy, just like how the proposed composed flow uses a pre-trained generator and composes it with a 'pre-generator'. The difference here is that the invertible map is given for free, much like the concurrent work VAEBM. This is the reason why I believe a comparison with LD and vanilla VI in the data space is a bit straw-man. Furthermore, it is reiterated in the response to R4 that \\\"the main contribution of our work is not the idea of performing inference in the latent space\\\"; however, the above quote seems to rely heavily on the benefit of performing inference in the latent space. I found this inconsistent. \\n\\nAlso, I agree [1] is fundamentally different from the proposed method, due to its adversarial nature. And I think we all agree it's relevant. But I don't see why \\\"[it] does not offer the same level of computational flexibility\\\". \\n\\n> Thus, our method makes the following trade-off: it gains the computational flexibility of flow models .... VI plays a key role here because it allows for a tractable likelihood-based optimization of the pre-generator.\\n\\nWhat computational flexibility exactly? Optimizing the proposal distribution via minimizing KL and sampling using Langevin-based MCMC are very similar in practice, as the latter can be seen as a non-parametric flow [*]. This suggests the evolution of the optimization of the proposal for VI & the Markov chain of Langevin-based MCMC are actually very similar to each other. \\n\\n[*] Langevin Dynamics as Nonparametric Variational Inference\\n\\nI do like the general idea of using a pre-trained likelihood-based model and use it to perform various inference tasks. And I think this is relatively under-explored in the literature. But the arguments (of the claimed contributions and novelty) are still a bit all over the place and cannot convince me it is ready for publication. I would be willing to raise my score, though, should the arguments be refined. \\n\\n\\u2014\", \"some_additional_benefits_of_vi_not_covered_by_the_authors_that_can_be_used_as_motivation\": \"Samples from the pre-generator is i.i.d. (whereas MCMC samples are potentially correlated, which might not be ideal). Another useful feature of VI is the fact that it can be amortized, and one can hope to generalize to unseen, future observations to condition on. But this is not explored in this work.\\n\\n\\u2014\\n\\nI've read the appendix for the additional discussion on the smoothing parameter. Thanks for the effort. It seems there is some optimization problem for small values of $\\\\sigma$. Perhaps a useful threshold of mean absolute error $\\\\leq 1/256$ can be used in practice, as it is the width of the 8-bit quantization bin. Also, how is the MSE larger than 1 for larger $\\\\sigma$ values? Is the error calculated in the logit space, or is the data rescaled some other way? 
\\n\\nThanks for the clarification on the hardness result.\"}",
"{\"title\": \"Response to Reviewer 4 (Part 2)\", \"comment\": \"**3. More baselines are needed. In particular, it would be useful to have baselines without likelihood smoothing and forward KL methods such as [1,3].**\\nWe thank R4 for suggesting additional references [1-3] that propose interesting solutions to difficult inference tasks. That said, we believe forward KL methods are not applicable in our setting, which we explain in the context of [3]. In [3], the authors get around the intractability of forward KL variational inference by performing joint-contrastive VI, from which the forward amortized VI (FAVI) loss (equation 6 in [3]) is derived. Written using notations from our setup, this loss is $L_{FA} = \\\\mathbb{E}_{p(x_1,x_2)}[-\\\\log q(x_2 \\\\mid x_1)]$.\\n\\nThere are two important distinctions from our approach. First, this loss is only valid for the amortized version of our setup where we wish to train a single variational posterior $q$ for all observations $x_1^*$. Indeed Section 4.1 of [3] mentions that FAVI can be derived as an amortization of the stochastic forward VI loss $D_{\\\\rm KL} (p(x_2 \\\\mid x_1) \\\\Vert q(x_2 \\\\mid x_1))$. This is different from our approach based on stochastic VI, which doesn't suffer from amortization gap [4]. Second, the FAVI formulation does not leverage the invertibility of the base model. In the extreme case where we condition on a degenerate, constant observation $T(x) = c$, the minimizer of FAVI loss is $p(x)$ itself: $p(x) = \\\\arg\\\\max_q D_{\\\\rm KL} (p(x \\\\mid c) \\\\Vert q(x \\\\mid c))$. Thus FAVI attempts to distill the base model using the variational posterior, which can be a challenging optimization task. For our method, this corresponds to the trivial task of $\\\\hat{f}$ learning to represent the identity function. While it may be possible to adapt [3] to leverage the base model, we believe this is outside the scope of our work.\\n\\n**References** \\n[1] Papamakarios, George, and Iain Murray. \\\"Fast \\u03b5-free inference of simulation models with bayesian conditional density estimation.\\\" Advances in Neural Information Processing Systems. 2016. \\n[2] Le, Tuan Anh, Atilim Gunes Baydin, and Frank Wood. \\\"Inference compilation and universal probabilistic programming.\\\" Artificial Intelligence and Statistics. PMLR, 2017. \\n[3] Ambrogioni, Luca, et al. \\\"Forward amortized inference for likelihood-free variational marginalization.\\\" The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019. \\n[4] Cremer, C., Li, X. & Duvenaud, D.. (2018). Inference Suboptimality in Variational Autoencoders. Proceedings of the 35th International Conference on Machine Learning, in PMLR 80:1078-1086\"}",
"{\"title\": \"Response to Reviewer 4 (Part 1)\", \"comment\": \"We thank R4 for the detailed and constructive feedback with pointers to additional references. Below we include our response to the concerns raised by R4.\\n\\n**1. Clarification of the main contribution of the paper.** \\nAs mirrored in our response to R1, we clarify that the main contribution of our work is not the idea of performing inference in the latent space. Rather, our key idea is the use of a pre-generator **parametrized as a flow**, which can then be trained via variational inference by exploiting the invertibility of the base model. We hope that Section 5 (Related Works) in the updated manuscript makes this point clearer, and also refer to our first response to R1 for additional context.\\n\\n**2. Implications of the hardness proof and the necessity for smoothing.** \\nAs R3 mentioned, we believe there may be a misunderstanding around the implications of our hardness result. We clarify a few points below.\\n\\n> Hardness of exact conditioning is only a valid motivation for using approximate inference, not for adopting an approximate likelihood.\\n\\nWhile this statement is true if we only assume Theorem 1, our hardness result goes a bit further. As stated in Section 3 and in Corollary 3 (Appendix A), even **approximating** (w.r.t total variation distance) the true conditional distribution $p(x_2 \\\\mid x_1=x^*)$ is hard as long as we require exact conditioning. This is what motivates our relaxation to allow approximate conditioning via smoothing, i.e. we aim to learn $p(x_2 \\\\mid x_1 \\\\approx x^*)$ (note the change of $x_1=x^*$ to $x_1 \\\\approx x^*$). We have updated Section 3 to further emphasize this point.\\n\\n> It seems that the authors want to get a least-square loss component in the pixel space.\\n\\nGiven the above motivation for approximate conditioning, the particular choice of smoothing scheme is a hyperparameter that needs to be chosen. We have experimented with Gaussian smoothing as well as non-Gaussian smoothing, such as the energy-based kernel implicitly defined by the LPIPS distance $p(\\\\tilde{x}_1 \\\\mid x_1) \\\\propto \\\\exp(-\\\\text{LPIPS}(\\\\tilde{x}_1, x_1))$. The intuition was that we could achieve better sample quality by relying on a perceptual metric. However there was no appreciable difference between the two in our preliminary experiments, so we chose to use Gaussian smoothing simply because it was faster to run experiments with. Thus, the least square term in the loss was a result of using Gaussian smoothing, not a choice that motivated the use of Gaussian smoothing. It is possible that better performance can be achieved with more extensive search over the smoothing distribution, but we leave that investigation for future work.\\n\\n> Without smoothing you get a least square loss in the latent space which is likely to be much more appropriate. Gaussian smoothing is neither well motivated nor supported experimentally.\\n\\nWe appreciate your suggestion, but it's unclear to us how not smoothing the observation would lead to a least square loss in the latent space. In the general case of conditioning under the transformation $y=T(x)$, the latent VI loss without smoothing simplifies to $D_{\\\\rm KL} (p_{\\\\hat{f}}(z) \\\\Vert p_f(z \\\\mid y=y^*)) = D_{\\\\rm KL}(p_{\\\\hat{f}}(z) \\\\Vert p_f(z)) - \\\\mathbb{E}_{z \\\\sim q}[\\\\log p(y=y^* \\\\mid z)] + \\\\log p(y=y^*)$, which to the best of our knowledge does not contain a least square term. 
Moreover, the second term is particularly problematic for optimization because $T \\\\circ f$ is a deterministic function and $\\\\log p(y=y^* \\\\mid z)$ is degenerate, i.e. it is undefined for all $z$ such that $T(f(z))$ does not exactly match the observation $y^*$. We believe this, combined with the hardness result, sufficiently motivates the use of smoothing.\\nIf there is any misunderstanding on our part on this point, we'd greatly appreciate a clarification.\"}",
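Concretely, with Gaussian smoothing the latent objective reduces to a KL term plus a least-squares data-fit term. A minimal Monte-Carlo sketch is below; the toy affine flows, the forward operator T, and the value of sigma are illustrative assumptions, and the analytic KL is only valid for this Gaussian toy case.

```python
import torch

d, sigma = 8, 0.1
T = lambda x: x[:, :4]           # toy forward operator: keep the first half
y_star = torch.zeros(4)          # the observation being conditioned on

# Toy elementwise-affine pre-generator f_hat (trainable) and a frozen
# stand-in for the pre-trained base flow f.
a_h = torch.ones(d, requires_grad=True)
b_h = torch.zeros(d, requires_grad=True)
f_hat = lambda e: a_h * e + b_h
f = lambda z: 1.5 * z

def smoothed_vi_loss(n=1024):
    eps = torch.randn(n, d)
    x = f(f_hat(eps))
    # KL(p_{f_hat} || N(0, I)) in the latent space, analytic for this toy
    # Gaussian case; by invariance it matches the KL between the composed
    # model and the base model in the data space.
    kl = 0.5 * (a_h ** 2 + b_h ** 2 - 1.0 - 2.0 * torch.log(a_h.abs())).sum()
    # Gaussian smoothing term: -log p_sigma(y* | x) is, up to a constant,
    # the least-squares penalty ||T(x) - y*||^2 / (2 sigma^2).
    ls = ((T(x) - y_star) ** 2).sum(dim=-1).mean() / (2 * sigma ** 2)
    return kl + ls

loss = smoothed_vi_loss()
loss.backward()
print(loss.item(), a_h.grad.norm().item())
```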
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank R3 for the thoughtful and positive feedback. We are glad that you found our work valuable, and have incorporated the suggestions in the updated manuscript. More detailed response to the questions raised by R3 are included below.\\n\\n**1. Effect of the base model on the performance of the overall method.** \\nWe are glad that R3 pointed this out, as we also think that this is an important aspect we didn't get to investigate further. While our setup assumes that the base model $p_f(x)$ is correct, in practice there will always be a mismatch between $p_f(x)$ and $p_{\\\\rm true}(x)$. Interestingly, all our experiments were done using images from the test set, i.e. samples from $p_{\\\\rm true}(x)$, not the base model $p_f(x)$. The fact that resulting conditional samples look reasonable shows some level of robustness of our method to model mismatch. We'd like to probe deeper into this by repeating our experiments using multiple base models of varying quality (e.g. measured by test set bpd), but we leave it as future work for now.\\n\\n**2. It's not immediately clear why $y=T(x)$ can be substituted for the conditioner $x$ in $p_{\\\\sigma} (\\\\tilde{y} = y^\\\\* \\\\mid x)$.** \\nThis is simply the result of rewriting the conditioning on $x$ in terms of the observed variable $y \\\\triangleq T(x)$. To see this, notice that by definition the density $p(y \\\\mid x)$ is the Dirac delta function $\\\\delta_{T(x)}(y)$ concentrated at $T(x)$. Thus $p_\\\\sigma (\\\\tilde{y}=y^* \\\\mid x) = \\\\int_{y} p_\\\\sigma(\\\\tilde{y}=y^* \\\\mid y) p(y \\\\mid x) \\\\mathrm{d}y = \\\\int_{y} p_\\\\sigma(\\\\tilde{y}=y^* \\\\mid y) \\\\delta_{T(x)}(y) \\\\mathrm{d}y = p_\\\\sigma(\\\\tilde{y}=y^* \\\\mid y=T(x))$, where the first equality is true from the conditional independence between $\\\\tilde{y}$ and $x$ given $y$ (as shown in Figure 2c). We clarify this in the updated proof.\\n\\n**3. The variable lower-case $m$ in the hardness proof is never defined. Also the notation for expectation is inconsistent in the proof of equation 3.** \\nThank you for catching these. The lowercase $m$ refers to the number of clauses in the given conjunctive normal form, and we define this in the updated manuscript. We also fixed the inconsistent notation for expectation in the proof of equation 3.\"}",
"{\"title\": \"Response to Reviewer 1 (Part 2)\", \"comment\": \"**5. Generated samples not satisfying the hard constraints is not desirable. Is it possible to anneal the $\\\\sigma$ while training the latent flow so that it will concentrate on the (potentially degenerate) solution that satisfies these constraints?**\\nWhile it is certainly desirable to generate samples that match the hard constraints perfectly, being able to do so in practice is very unlikely given our hardness result. We have nonetheless experimented with different values of $\\\\sigma$ as well as annealing it to a small positive target variance (since the loss is undefined when $\\\\sigma = 0$). Empirically, we observed that annealing had no benefit compared to simply training with the target variance from the start. For very small values of $\\\\sigma$ (e.g. $\\\\sigma < 1\\\\mathrm{e}{-4}$), pixelwise variances for the observed portion were either zero or very low. As shown in Figure 8a, the smoothed observation $\\\\tilde{x}_1$ is visually indistinguishable from $x_1$ for small values of $\\\\sigma$. These findings are summarized in Appendix C.2, with samples at different values of $\\\\sigma$ and plots of pixelwise variance as well as MSE of the observed portions (see Figure 8).\\n\\n**6. Could the hardness result be derived from the result on general Bayesian Belief Network?** \\nThe classical paper [8] shows that inference is NP-hard for general directed graphical models, where a probability table is specified for each variable conditioned on its parents.\\nBecause these conditional distributions need complexity exponential in the number of parents configurations to describe, directed graphical models limit the in-degree of variables (e.g. to logarithmic in the total number of variables).\\nThus directed graphical models for which inference is hard necessarily have conditional independence assumptions, while deep generative models do not exhibit such conditional independence unless explicitly enforced. Therefore hardness results for one family of distributions do not transfer to the other.\\n\\n**7. Is the conditional distribution in the hardness resulting referring to $p(x_2 \\\\mid x_1)$?** \\nYes, Theorem 1 refers to sampling from the condition distribution $p(x_2 \\\\mid x_1)$. We updated the theorem statement to be more precise. Note that our general formulation under differentiable transformation includes this as a special case where $T(x) = x_1$ and sampling procedure simply takes the $x_2$ portion from the joint samples conditioned on the observation.\\n\\n**8. The discussion of hardness seems to be used to motivate the relaxation of the hard constraints (of the givens). It doesn\\u2019t seem that relevant when the observation is not a deterministic function of x (e.g. compressed sensing).** \\nOur formulation is indeed specific to deterministic forward operators. We focus on this setting as it already encompasses a large set of tasks, including noiseless compressed sensing, inpainting, super-resolution, phase retrieval, and numerous other inverse problems. It is an established topic in both classical and deep inverse problem literature (e.g. see [5-7]). Extending our method to noisy/stochastic forward operators would certainly be an interesting research direction, but we agree with R3 that it is out of scope of this work.\\n\\n**References**\\n\\n[1] Jesse Engel, Matthew Hoffman, and Adam Roberts. Latent constraints: Learning to generate conditionally from unconditional generative models. 
arXiv preprint arXiv:1711.05772, 2017. \\n[2] Matthew D Parno and Youssef M Marzouk. Transport map accelerated markov chain monte carlo. SIAM/ASA Journal on Uncertainty Quantification, 6(2):645\\u2013682, 2018. \\n[3] Matthew Hoffman, Pavel Sountsov, Joshua V Dillon, Ian Langmore, Dustin Tran, and Srinivas Vasudevan. Neutra-lizing bad geometry in hamiltonian monte carlo using neural transport. arXiv preprint arXiv:1903.03704, 2019. \\n[4] Erik Nijkamp, Ruiqi Gao, Pavel Sountsov, Srinivas Vasudevan, Bo Pang, Song-Chun Zhu, andYing Nian Wu. Learning energy-based model with flow-based backbone by neural transport mcmc. arXiv preprint arXiv:2006.06897, 2020. \\n[5] Ashish Bora, Ajil Jalal, Eric Price, and Alexandros G Dimakis. Compressed sensing using generativemodels. InInternational Conference on Machine Learning, pp. 537\\u2013546. JMLR. org, 2017. \\n[6] Lynton Ardizzone, Jakob Kruse, Sebastian J. Wirkert, Daniel Rahner, Eric W. Pellegrini, Ralf S. Klessen,\\nLena Maier-Hein, Carsten Rother, and Ullrich K\\u00f6the. Analyzing inverse problems with invertible neural\\nnetworks. CoRR, abs/1808.04730, 2018. URL http://arxiv.org/abs/1808.04730. \\n[7] Muhammad Asim, Ali Ahmed, and Paul Hand. Invertible generative models for inverse problems:\\nmitigating representation error and dataset bias. CoRR, abs/1905.11672, 2019. URL http://arxiv.\\norg/abs/1905.11672. \\n[8] Gregory Cooper. The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks. Artificial Intelligence, Volume 42, Issue 2-3, pp. 393-405, 1990.\"}",
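For reference, a minimal sketch of the sigma-annealing variant described in point 5: sigma decays geometrically from an initial value toward a small positive target, never reaching zero since the loss is undefined there. The schedule shape, endpoints, and the hook into the training loss are hypothetical stand-ins, not the paper's actual configuration.

```python
import math

sigma_init, sigma_target = 0.5, 1e-3
num_steps = 10_000

def sigma_at(step):
    # Geometric interpolation from sigma_init down to sigma_target; sigma
    # stays strictly positive since the loss is undefined at sigma = 0.
    t = min(step / num_steps, 1.0)
    return math.exp((1 - t) * math.log(sigma_init) + t * math.log(sigma_target))

for step in (0, 2500, 5000, 10000):
    print(step, sigma_at(step))
# A training loop would plug sigma_at(step) into the smoothed VI loss.
```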
"{\"title\": \"Response to Reviewer 1 (Part 1)\", \"comment\": \"We thank R1 for the insightful and detailed feedback. We enjoyed going over the additional references and updated our manuscript accordingly. Below we include our detailed response to the concerns and questions raised by R1.\\n\\n**1. Incremental novelty given the missing references in discussion of prior works.** \\nAgain we very much appreciate the references [1-4] relevant to our work. The revised manuscript now includes a discussion of the said papers in Section 5.\\n\\nRegarding the concerns for novelty, we emphasize that the key contribution of our work is the use of a **flow-based** pre-generator that is trained with a likelihood-based objective by exploiting the **invertibility** of the base model. Importantly, the resulting conditional sampler retains the computational flexibility of flow models and can be used for tasks (e.g. compression) that require exact conditional likelihood or inversion. This is clearly different from the setting of [1] where an **adversarially** trained actor network is composed with a **non-invertible** generator and therefore does not offer the same level of computational flexibility. The updated manuscript clearly delineates these distinctions from [1]. We also agree with R3 that the references [2-4] are only marginally relevant to our work. While [2-4] share the general principle of leveraging the favorable geometry of the latent space, their main focus is on improving the mixing of Markov chains and does not benefit from the particularities of our setup, e.g. the invertibility of the base model.\", \"to_reiterate\": \"Our method employs VI to learn the variational posterior, which is itself a flow model and provides efficient sampling, inversion, and likelihood estimation -- tasks that are much slower or more difficult with MCMC methods. Thus, our method makes the following trade-off: it gains the computational flexibility of flow models with reasonable empirical performance at the cost of losing asymptotic guarantees of MCMC methods and limiting the variational family to flow models. VI plays a key role here because it allows for a tractable likelihood-based optimization of the pre-generator.\\n\\n**2. Bits-per-dim and unconditional samples of the pre-trained flow models.** \\nWe added test set BPD as well as unconditional samples from all three base models used in our experiments in Appendix C.3.\\n\\n**3. Baselines are weak and can be improved by deriving a kernel in the latent space as per [3].** \\nWe agree with R1 that the MCMC-based baselines can be further improved. We again note that the point of our experiments is not to show that our approach can generate better samples, as explicitly mentioned in the introduction: \\\"... VI allows for likelihood evaluation and fast sampling, but at a _lower sample quality_ compared to MCMC counterparts\\\". Given sufficient mixing time with a well-tuned kernel, we indeed expect MCMC methods to produce better samples than our VI-based approach, whose optimality relies on optimizing a loss function parametrized by a complex neural network. Rather, our experiments provide an empirical evidence that our approach can achieve reasonable sample quality competitive to simple MCMC methods, while retaining other computational benefits of flow.\\n\\nWe also point out that our method makes a fundamentally different trade-off in terms of practical run time when generating i.i.d. samples from the posterior. 
MCMC methods take time linear in the number of samples per observation due to mixing and autocorrelation, but multiple chains can be easily run in parallel for a batch of observations. Our method takes time linear in the number of observations because each observation requires training a separate pre-generator. But once trained, sampling is essentially instant as each sample only requires a single forward-pass of the composed network $f\\\\circ \\\\hat{f}$, with the benefit of generated samples guaranteed to be i.i.d.\\n\\n**4. Why do the samples in Fig. 4 have the same stroke style? Is it an indicator of the mode seeking behavior of the reverse KL?** \\nThis is an interesting observation, and we thank R1 for pointing this out. We believe this was due to two reasons: (1) the MNIST classifiers were not calibrated and overfit to certain stroke styles, and (2) we used a very conservative smoothing parameter of $\\\\sigma = 0.02$. We ran another version of this experiment with $\\\\sigma = 0.1$ and employed early stopping to avoid overfitting to the classifier output. The result can be seen in the updated Figure 4a, with samples showing much more diversity.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank R2 for the thoughtful and positive feedback. As suggested, we have updated the manuscript to avoid overloading of notations and improve the clarity and quality of our writing. Below we include our detailed response to the questions raised by R2.\\n\\n**1. Notational clarification and concerns for marginalization.** \\nWe believe the main source of confusion is the overloading of $\\\\hat{f}$ in two different contexts. For our method, $\\\\hat{f}: \\\\epsilon \\\\mapsto z$ denotes the invertible mapping for the pre-generator $p_{\\\\hat{f}}$. Thus our variational posterior is the composed model $p_{f \\\\circ \\\\hat{f}}(x)$, and we train $\\\\hat{f}$ such that $p_{f \\\\circ \\\\hat{f}}(x)$ approximates the intractable conditional $p_f(x \\\\mid \\\\tilde{x}_1 = x_1^*)$. \\n\\nImportantly, since the samples from $p_{f \\\\circ \\\\hat{f}}(x)$ are obtained via $z \\\\sim p_{\\\\hat{f}}, x = f(z)$, we can't efficiently compute the _marginal_ likelihood $p_{f \\\\circ \\\\hat{f}}(x_2)$ due to having to integrate out $x_1$. This is why we use a modified VI objective that minimizes the KL between the variational and true **joint** distributions, i.e. $D_{\\\\rm KL}(p_{f \\\\circ \\\\hat{f}}(x) \\\\Vert p_f(x \\\\mid \\\\tilde{x}_1 = x_1^*)$. Note that while we cannot compute the marginal likelihood $p_{f \\\\circ \\\\hat{f}}(x_2)$, we can still sample from it by first sampling $x \\\\sim p_{f \\\\circ \\\\hat{f}}(x)$ and only taking the $x_2$ portion. But as noted by R2, this concern vanishes in the more general formulation with a differentiable transformation $T$.\\n\\nIn the case of Ambient VI, we overloaded the notation and let $\\\\hat{f}: \\\\epsilon \\\\mapsto x$ denote the mapping for the variational posterior such that $p_{\\\\hat{f}}(x)$ directly approximates $p_f(x \\\\mid \\\\tilde{x}_1 = x_1^*)$. We realize that this notation is confusing and updated the manuscript to use $p_g$ to denote the variational posterior for Ambient VI (reflected in equations 2 and 6). We also added a pseudocode (Algorithm 2) in Appendix C for the sampling procedure for the partitioned case and the general case.\\n\\n**2. Is $p_f(x_2 \\\\mid \\\\tilde{x}_1 = x_1^*)$ approximated with $p_f(x \\\\mid \\\\tilde{x}_1 = x_1^*)$?** \\nNo. As explained in our response above, we approximate the true conditional joint distribution $p_f(x \\\\mid \\\\tilde{x}_1 = x_1^*)$ with our variational distribution $p_{f \\\\circ \\\\hat{f}}(x)$.\\n\\n**3. Minor concerns and suggestions** \\nThank you for the suggestions. We have incorporated your suggestions into Sections 1 and 2 for clarity and improved writing.\"}",
"{\"title\": \"Justification of the score and explanation of relevancy\", \"comment\": \"Thank you for taking your time to read the other reviews including mine.\\n\\nRef [2-3] are prior work using a flow to improve the geometry of the energy for performing gradient-based MCMC. I deem them relevant because this work proposes to compose a latent flow with a trained flow generator to perform inference, which is equivalent to performing MCMC in the latent space. The fact that the latent flow can outperform the ambient VI or ambient Langevin MCMC is not surprising, as this problem has been demonstrated and mitigated in the cited work. This calls the novelty of this paper into question (contribution (A) in my label), which is why I requested stronger baselines in the experiment section; and it also supports the following quote from R4: \\n\\n> The main idea behind the method is to perform inference in the latent space, this in my opinion is not a noteworthy contribution as it is just the obvious way to do inference in this setting. \\n\\nOne could argue performing latent VI is not exactly the same as performing latent space MCMC. But then a better motivation of why VI is needed should be provided, e.g. the option to perform amortization to speed up inference (which is not conducted in this work). \\n\\n**Smoothing**: I did not say smoothing is unnecessary. I understand without the smoothing the hard constraints will result in degenerate target distributions. I only mentioned the softened constraints might not be desirable and asked if it's possible to anneal it away (for example, so that the border of the inpainted images will look closer to that of the original images).\"}",
"{\"title\": \"Updated score after checking reviewer references\", \"comment\": \"After reviewing the citations provided by reviewer 1 (in particular reference [1] which I had not previously seen), I must agree that the novelty of the authors' approach is more questionable than I had originally thought. Thus, the proposed use of a flow as a \\\"pre-generator\\\" is more incremental than innovative.\\n\\nI do not think that references 2, 3, and 4 are as relevant as reviewer 1 suggests.\\n\\nI do not agree with objections from reviewers 1 and 4 that the smoothing is unnecessary. The relaxation of the conditional inference problem follows naturally from the authors' theoretical result proving the hardness of conditional inference under arbitrary partitions of variables. This makes intuitive sense as well since a fixed (non-smooth) observation will induce a degenerate conditional distribution. Reviewer 1 points out that it might not be relevant in the case when the observation is a non-deterministic function of x. This appears to me to be outside of the scope of this work and an irrelevant objection.\\n\\nIn light of this, I am updating my review score to a 7. I still believe this is a good paper and should be seriously considered for acceptance. This is my final score, and I will defend it as necessary.\"}",
"{\"title\": \"I like the motivation of the problem and the solution. There is also a commendable effort to collect empirical evidence. Some reservations; can benefit from a second review round.\", \"review\": \"### Summary:\\n \\nThis paper uses Variational Inference to query pre-trained flow-models. If the flow variable is $x$, then querries are either conditioned on the part of $x=(x_1,x_2)$, or on a differentiable transformation of $x$. Authors first show that it is not trivial to conduct such queries exactly or approximately for a general class of flow-models. The paper then proposes a framework that affords such querries by working in the latent space--empirical evidence favors the proposed approach over contemporary methods. \\n\\n### Strength:\\n\\nThe problem is well-motivated. The authors outline several instances where one would want to work with a pre-trained model and use it to query over new data. Further, I appreciate the use of proof that the problem is hard to motivate the solution. I also commend the author's effort to collect empirical evidence for their method. I especially like the \\\"Why Ambient VI fails\\\" explanation and find the contour plot beautiful. \\n\\n\\n### Concerns:\\n\\nOne primary with paper is the lack of clarity and overload of notation. This is especially true for section 3.\\n\\nThe equations 1 and 2, use the distribution $p_{f\\\\circ \\\\hat{f}}(x_2)$ and $p_{\\\\hat{f}}(x_2)$. The preceding section uses $p_{f\\\\circ \\\\hat{f}}(x)$ and $p_{\\\\hat{f}}(x)$ for $x = (x_1, x_2)$; it is possible that the authors are referring to the marginal for $x_2$. However, it is not immediately clear from the discussion. Further, there is reference to $y$ which is undefined till that point. More so, if we are talking about marginals, then this necessiates the need to evaluate these marginals. The authors offer no discussion on this. \\n\\nIn my understanding of the work, $p_f (x_2| \\\\tilde{x_1} = x_1^*)$ is approximated with $p_f (x| \\\\tilde{x_1} = x_1^*)$. Thereafter, we can use $p_f (x| \\\\tilde{x_1} = x_1^*) \\\\approx p_f (x_2,x_1) p_\\\\sigma (x_1^*|x_1)$ to calculate the eq 1. However, this still leaves me uncertain about the marginal distribution $p_{f\\\\circ \\\\hat{f}}(x_2)$. I also believe if the above explanation is true, then it is bit of a leap.\\n\\nHowever, these concerns vanish for the section with differentiable transformations--here, we do not talk about the partitioning of x, so the expressions are straightforward to evaluate. \\n\\nThe easiest way to convince me would be to offer a clear explanation of the method and to straighten out the notation. It will be great to get an algorithm--like the one in Appendix C--for the partitioning case. \\n\\n#### Minor concerns and suggestions:\\n\\nIn section 2, the last paragraph, the formulation is not unique to the authors' framework. It is the fundamental idea of VI to use ELBO over KL divergence; however, the current presentation makes it feel that this is a novel observation made by the authors. \\n\\nSection 1, third paragraph: \\\"VI allows for likelihood evaluation and..\\\"--I believe the term likelihood has been used to refer to $\\\\log q$--this is confusing as the term is often reserved for $\\\\log p(x)$ or $\\\\log p(x|z)$. I will suggest being unambiguous with terminology.\\n\\nSection 1, fourth paragraph: \\\"Specifically, we use variational inference to\\nlearn a distribution in the latent space ...\\\"--I find this sentence hard to parse. 
How is a distribution \\\"fed\\\" to the pre-trained model? (this is more of a writing concern than anything else.)\\n\\nAuthors can consider using less left margin for bullet points under the heading \\\"our contributions.\\\"\\n\\n### Update after the rebuttal\\n\\nI think that the ideas in this paper are interesting and can inspire new uses. All of us agree that the problem presented here was important, and there is a lot of work to be done in this domain. However, after reading the discussion with other reviewers and their reviews, I believe the manuscript can benefit from another review round. Specifically, the authors can benefit from a thorough revision of the claims in the paper. Further, I would encourage authors to at least investigate how naive amortization approaches fair (irrespective of the result, authors will develop a stronger case for their line of work.)\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Solving interesting inference problems with existing tricks\", \"review\": \"Summary: The paper proposes to solve the conditional inference problem by performing a relaxed version of variational inference in the prior space of the flow-based model. The model p(x) is pretrained, and one is interested sampling from p(x|observation). The observation could be some subset of x (inpainting), gray scale representation of an image (colorization), lower-res representation (super resolution), or noisy version of the data (compressed sensing). The paper proposes to perform inference in the prior space, by composing the post-hoc trained latent flow with the trained invertible decoder, as the conditional distribution in that space is believed to have a better geometry.\", \"contributions_and_novelties\": \"(A) propose to perform inference in the latent space of a latent variable model to side step the bad geometry, (B) propose to replace hard constraint with stochastic relaxation (placing an additional likelihood term to model the dummy variable). (C) Applications seem interesting for testing the quality of an unconditional flow-based model.\", \"flaws\": \"missing several important references in the discussion to prior works. These include [1], which describes a more general framework to post-hoc perform sampling from a conditional distribution of a learned latent variable model by fitting a distribution in the latent space; [2,3,4] which propose to mitigate the bad geometry of the learned data energy (in this case, the density model itself) by transforming it into a space where it\\u2019s more Gaussianized. Similar idea has also been incorporated in [5] to enable the training of a residual EBM (no need to cite). The key novelties (A and B) are incremental in nature given the above related works that are not cited.\\n\\n\\n[1] Latent Constraints: Learning to Generate Conditionally from Unconditional Generative Models, 2017\\n[2] Transport map accelerated markov chain Monte Carlo, 2014\\n[3] NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport, 2019\\n[4] Learning Energy-based Model with Flow-based Backbone by Neural Transport MCMC, 2020\\n[5] VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models, 2020\", \"additional_details\": \"For experiments, please report the bits-per-dim (BPD) of the pre-trained flow model, as well as unconditional samples for reference and better comparison. \\n\\u201cAmbient\\u201d langevin dynamics is a very weak baseline, since one can easily improve the mixing via deriving a kernel in the latent space (as per [3]). It wouldl be a more fair comparison since the proposed method also composes with the learned decoder flow. Same goes to the naive implementation of the PL-MCMC which only uses a random walk kernel, which is a very weak baseline. \\nAre the rows of Fig 4 independent? Why do they have the same stroke style if they are all conditioned only on the label only? Is it an indicator of the mode seeking behavior of the reverse KL? \\nIn the qualitative results (tab 2, fig 3 and fig 5), the generated samples do not really satisfy the hard constraints that they are conditioned on (e.g. the subsets of x for inpainting), possibly due to the relaxation via the dummy variable. This is not desirable. Is it possible to anneal the $\\\\sigma$ while training the latent flow so that it will concentrate on the (potentially degenerate) solution that satisfies these constraints? 
\\nThe authors claim the hardness result is surprising. It has been long shown that sampling from a general Bayesian Belief Network is NP-hard [6]. Can\\u2019t the same conclusion be derived from that, with the main difference being the explicit parameterization via a coupling flow? \\n\\n[6] The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks, Cooper, 1990\\n\\n\\n\\nAdditional questions about the hardness result (I didn\\u2019t read the proof):\", \"generality_of_the_hardness_result\": \"the statement is about the hardness of sampling from the conditional distribution. What is the conditional distribution in this context, is it referring to p(x2|x1) (i.e. observation is a subset of x)? Please be precise. If this is the case it is consistent with the presentation of the previous section (VI). However the proposed method seems to require a more general treatment to take account of the other tasks, e.g. inverse problems.\\n\\nThe discussion of hardness seems to be used to motivate the relaxation of the hard constraints (of the givens). It doesn\\u2019t seem that relevant when the observation is not a deterministic function of x (e.g. inpainting, coloration, etc). For compressed sensing for example, the likelihood p(observation|x) naturally exists and is non-degenerate (by assumption). The presentation seems a bit confusing. \\n\\n\\n--- POST REBUTTAL ---\\n\\nModified my score after the rebuttal, since (1) I believe the re-purposing achieved by this work can potentially broaden the applicability of flow-based generative model (2) the authors have toned down the abstract and clarified the contributions in the intro, which now better reflects the value of the work. \\n\\nI am still leaning towards rejection at the end given the limited originality of the proposed method and the lack of a more comprehensive discussion of different possible approaches, but as means to the same end. For example, the relaxed inference problem can be solved with an MCMC method. These should all be discussed and compared if the contribution is about repurposing a joint likelihood model using flows.\\n \\n\\nPS. the last line (the references) of the last page might have been a mistake.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A solid contribution to the field of normalizing flows and conditional generative models\", \"review\": \"In this work, the authors propose a novel method of estimating conditional distributions over arbitrary partitions of variables $x = [x_1,x_2]$ using an existing pre-trained flow model for $p(x)$. Their method fits a new *pre-generator* flow $\\\\hat{f}$ for each observation which maps from a base distribution $\\\\epsilon \\\\sim N(0,\\\\mathbb{I})$ to the latent variables $z \\\\sim N(0,\\\\mathbb{I})$ where the mapping $z \\\\leftrightarrow x$ is learned by the pre-trained \\\"base\\\" flow $f$. The result is a mapping $\\\\hat{f}$ which learns to shift probability mass to regions of the latent space which correspond to the conditional distribution of the given observation. The authors present comprehensive experimental results which show a clear improvement over existing methods for conditional inference but with the drawback of needing to re-train the pre-generator for each individual observation.\", \"pros\": [\"Very well written, clear, easy to understand\", \"The proposed method is intuitive and well defined\", \"Placement of the method relative to recent work is very clearly explained\", \"Comprehensive theoretical analysis including an interesting hardness result, which is uncommon for the deep learning literature\", \"Comprehensive and convincing empirical analysis with clear results\"], \"cons\": [\"There is very little discussion on how the construction of the base generator $f$ affects the results of the proposed method\", \"The proof of hardness is somewhat opaque and feels contrived; but this is often the case with hardness proofs!\", \"The method has a clear weakness in needing to be retrained for each observation. However, this is clearly stated by the authors and left open as a direction for future work.\", \"Overall, I think this is an exceptional paper which makes a significant contribution to the field. I think it is suitable to accept as-is with only a few minor adjustments which I will enumerate below.\", \"1. It would be nice (but not absolutely necessary) to see some discussion regarding the construction of the base generator, as I mentioned in the Cons above; e.g. does the performance of this method depend significantly on the user's choice of base model? Intuitively, I would think so.\", \"2. A few notes on the proofs:\", \"The variable lower-case $m$ shows up in several spots in the hardness proof but is never defined. Perhaps these are typos and you meant to write $M$?\", \"In the proof for equation 3, the notation for expectations (i.e. $\\\\mathbb{E}$) is inconsistent in a few places. Presumably just typos.\", \"I may be missing something, but it's not immediately clear why $y=T(x)$ can be substituted for the conditioner $x$ in $p_{\\\\sigma}(\\\\tilde{y}=y^*|x)$. My immediate intuition is that this would only be valid if $T$ is injective, otherwise this may change the underlying conditional density. Please correct me if I am wrong, and preferably add a clarification to the proof as to why this is justified.\", \"Congratulations to the authors on a job well done!\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A standard use of stochastic variational inference with a not motivated use of likelihood smoothing\", \"review\": \"Summary:\\nThis paper is concerned with the approximation of conditional densities in trained normalizing flow models. The author use a variational Bayesian approach to estimate the conditional density of a set of variables given another set. As use cases, the authors present results in image inpainting and colorization. The paper also contains a theoretical result showing that exact conditioning is NP hard in additive normalizing flow models.\", \"pros\": [\"Relevant problem.\", \"Possibly interesting theoretical result.\", \"Interesting range of experiments with careful analysis of both quantitative and qualitative results. Large and appropriate list of performance metrics\"], \"cons\": [\"Low originality, the main idea to perform inference in the latent space is obvious\", \"The relevance of the theoretical result in justifying the rest of the paper is not clear and not enough space is given to its explanation.\", \"The smoothing of the likelihood using a very rough noise model is not required\", \"More baselines are needed. In particular, it would be useful to have baselines without likelihood smoothing and also comparison with (simulation based) forward KL methods such as [1,3].\"], \"relevance\": \"The problem of conditioning in generative models is highly relevant in our current ML environment as it allows to convert generators into very flexible inference machines capable of solving a large variety of problems.\", \"originality\": \"With the possible exception of the NP-hardness theorem, the original contributions of this work are very limited with some questionable elements (see below). Conditioning deep differentiable models is a standard domain of application of variational Bayesian inference and the authors use a very standard and natural approach. The main idea behind the method is to perform inference in the latent space, this in my opinion is not a noteworthy contribution as it is just the obvious way to do inference in this setting. The second methodological trick is to use a Gaussian smoothing of the likelihood. To my understanding, this procedure is neither well motivated nor supported experimentally.\", \"major_concerns\": \"I could be missing something but I do not understand why the authors are smoothing the likelihood since the flow already give a perfectly well-behaved joint model. The authors also seem to motivate this choice using their theorem showing that exact conditioning is NP hard. However this is only a valid motivation for using approximate inference, not for adopting an approximate likelihood. It seems that the authors want to get a least-square loss component in the pixel (or add-hoc feature) space. Without smoothing you will get a least square loss in the latent space which is likely to be much more appropriate. Therefore, I am not convinced that this work better in practice and it should at least be tested experimentally.\", \"paper_structure\": \"In general, the paper is well structured with a clear narrative and an appropriate amount of background material. \\nHowever, the treatment of the theorem in section 3 should be expanded. The experiment section is very well structured and the analysis of the results is very good and definitely above average. I am pleased that the authors took the time to analyze the bad performance of the native VI method.\", \"writing\": \"The paper is very clearly written. 
\\n\\nLiterature\\nThe coverage of the literature is appropriate. However, the author should also discuss methods based on synthetic sampling and forward KL divergence as they are a very viable approach to generative model conditioning in the latent space. For example:\\n\\n[1] Papamakarios, George, and Iain Murray. \\\"Fast \\u03b5-free inference of simulation models with bayesian conditional density estimation.\\\" Advances in Neural Information Processing Systems. 2016.\\n[2] Le, Tuan Anh, Atilim Gunes Baydin, and Frank Wood. \\\"Inference compilation and universal probabilistic programming.\\\" Artificial Intelligence and Statistics. PMLR, 2017.\\n[3] Ambrogioni, Luca, et al. \\\"Forward amortized inference for likelihood-free variational marginalization.\\\" The 22nd International Conference on Artificial Intelligence and Statistics. PMLR, 2019.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
A7-rYAC-np1 | Syntactic representations in the human brain: beyond effort-based metrics | [
"Aniketh Janardhan Reddy",
"Leila Wehbe"
] | We are far from having a complete mechanistic understanding of the brain computations involved in language processing and of the role that syntax plays in those computations. Most language studies do not computationally model syntactic structure, and most studies that do model syntactic processing use effort-based metrics. These metrics capture the effort needed to process the syntactic information given by every word (Brennan et al., 2012; Hale et al., 2018; Brennan et al., 2016). They can reveal where in the brain syntactic processing occurs, but not what features of syntax are processed by different brain regions. Here, we move beyond effort-based metrics and propose explicit features capturing the syntactic structure that is incrementally built while a sentence is being read. Using these features and functional Magnetic Resonance Imaging (fMRI) recordings of participants reading a natural text, we study the brain representation of syntax. We find that our syntactic structure-based features are better than effort-based metrics at predicting brain activity in various parts of the language system. We show evidence of the brain representation of complex syntactic information such as phrase and clause structures. We see that regions well-predicted by syntactic features are distributed in the language system and are not distinguishable from those processing semantics. Our results call for a shift in the approach used for studying syntactic processing. | [
"neuroscience",
"fMRI",
"syntactic representations",
"graph embeddings"
] | Reject | https://openreview.net/pdf?id=A7-rYAC-np1 | https://openreview.net/forum?id=A7-rYAC-np1 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PjGtCT9pw60",
"Rqe27Wm3vJ",
"D_fYEkkIiB_",
"-cD8-fo_h0j",
"IjdDUhCxri",
"VmaNGZ_OdXw",
"uRxVtsLBW2J",
"-Dyk6cI2Tmz",
"IN32cpE-C20",
"Su3E5FfxvxT",
"tIrB4E8RZib",
"JGvtM46NG9z",
"ih_E0CTCQyK",
"ACew4UjMjqs",
"_PNiHjH12Ne",
"1obJkmhgkOV"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1613450431978,
1610040496837,
1606274529906,
1606252965037,
1605603502098,
1605523057114,
1605489224784,
1605487922935,
1605487304416,
1605487015126,
1605486587459,
1605486330807,
1603924808458,
1603913609601,
1603894737114,
1603282506365
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3702/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Response to meta-review\", \"comment\": \"Thank you for your meta-review. We would just like to clarify that we only said that there are some papers which question the level of importance of syntactic composition for the brain, including Pylkk\\u00e4nen (2020). Furthermore, this was a rather secondary point in our paper and responses. The main argument is about the importance of explicitly representing syntax and the fact that semantics and syntax are represented in the same regions. Also, we use the papers by Fedorenko and colleagues to in fact support the hypothesis that syntactic representations can be seen in brain data and that the regions which process them are not very distinguishable from those that process semantics. We are not sure how this came across as meaning that these papers support the theory that there are no syntax representations in the brain.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper explores the brain's activity in response to language, specifically targeting the signatures of syntax in the brain. The authors specifically investigate the signatures of specific syntactic elements against the \\\"typical\\\" effort based syntax measures from some previous work.\\n\\nThe title and abstract of the paper are clear and compelling, but the text of the paper muddies the message and this was expressed in the reviews. There may be some debate in the literature as to if syntax and semantics are dissociable, and to what degree we can actually measure syntax in the brain, but I (and your reviewers) have trouble believing that any one actually thinks there is *no* syntax representations in the brain. Certainly this is not a claim made by either the Federenko or Pylkkanen papers the authors cite. Federenko says \\\"lexico-semantic and syntactic processing are deeply inter-connected and perhaps not separable\\\" but doesn't claim that the brain doesn't \\\"do\\\" syntax. Pylkkanen says \\\"Syntax in the brain is necessary to explain the fact that humans are exquisitely skilled at judging syntactic well-formedness, even for sentences that have no coherent meaning.\\\"\\n\\nI suggest this paper either rephrase the arguments, more clearly articulate the issues they wish to address, or find another venue where the reviewers might be more read to debate *if* syntax is encoded in the brain. That seems outside of the scope of ICLR.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you very much for increasing the score! We would like to point out that we do test three different types of graph embeddings with each trying to answer a different question. The ConTreGE Comp vectors allow us to investigate if the brain is concentrating on local syntactic information; the ConTreGE vectors indicate whether the brain anticipates future syntactic structure and whether it processes phrase-level syntactic structure; and finally, the InConTreGE vectors can tell us whether the brain could be computing several possible top down partial parses that can derive the words seen thus far. These specific questions can indeed be investigated using our vectors. The fact that most of our features are predictive across the language network might make a reader question the importance of our embeddings since specific regions of the brain cannot be associated with different syntactic processes. However, if it is true that the entire network processes syntax as recent research suggests (see Fedorenko et al. (2020)), these findings might indeed be representative of how the brain truly processes syntax.\\n\\nMoreover, our method of encoding constituency trees can be used to study other syntactic hypotheses as well. For example, one could use them to encode trees generated using different parsers to determine which parser produces trees that are closest to the brain\\u2019s representations. They could also be used to examine the \\u201crange\\u201d of the syntactic information which is represented in the brain. For example, we have tried varying the height of the subtrees (by truncating them) to see if a certain height leads to better predictions (these results were not included for the sake of brevity). This could inform us about how the brain processes very complicated sentence structures and if it only represents more recent syntactic information in such cases. These types of hypotheses that rely on varying parse trees cannot be effectively studied using the current set of effort-based metrics but they can be studied by building graph embeddings using our techniques.\\n\\nFinally, it must be noted that computational techniques can be used to mainly generate hypotheses or give some amount of initial validation. Only controlled experiments with carefully designed stimuli could possibly be used to strongly support/confirm hypotheses about the brain although some such studies have also shown variable results. In this context, we believe that our paper can definitely aid in hypothesis generation and provide initial validation of a hypothesis about syntactic processing. This can help researchers weed out hypotheses that are very likely to be false and concentrate on confirming those that are likely to be true.\"}",
"{\"title\": \"Reply to authors\", \"comment\": \"Thank you to the authors for your detailed replies. I want my score to acknowledge that it is certainly noteworthy that the proposed representations, as well as the BERT representations, are more predictive of brain activity than are effort-based metrics. This is surely meaningful, and surely points us toward some interesting insights about what information is reflected in the relevant brain activity. I am going to increase my score accordingly.\\n\\nHowever, I remain hesitant because although the better predictive power surely means something, I still don't feel the paper in its present form gives us a clear/confident answer about *what* the findings mean about the brain's representation of syntax -- again, in large part because of the fair amount of opacity in both the graph embeddings and the BERT embeddings. Since the aim of this line of work is precisely to shed light on how the brain works, this seems an important shortcoming, and one that I would love to see addressed more substantively to improve the paper's ultimate impact. Along the same line, the authors' argument about taking a representation-based approach to studying syntax in the brain is an interesting one, but I think that it will be easier to sell the value of this approach if you can give a clear illustration of the precise insights that this approach affords beyond existing methods. For this reason I am setting my score at 5.\"}",
"{\"title\": \"Corrected\", \"comment\": \"Thank you for pointing that out! We have fixed the formatting in the manuscript.\"}",
"{\"title\": \"LaTeX\", \"comment\": \"Thanks for the reply. I have nothing to add.\", \"here_is_an_example_for_the_mis_used_command_you_asked_for\": \"> (ex. Brennan et al. (2016); Henderson et al. (2016); Frank et al. (2015); Boston et al. (2008); Willems\\net al. (2015))\", \"ideally_you_want_something_more_like\": \"> (ex. Brennan et al. 2016; Henderson et al. 2016; Frank et al. 2015; Boston et al. 2008; Willems\\net al. 2015) \\n\\nEasily doable if you look up what natbib (if you are using that \\u2014\\u00a0mutatis mutandis for raw bibtex, etc.) is needed, e.g.: https://gking.harvard.edu/files/natnotes2.pdf\"}",
"{\"title\": \"Reply to Reviewer 3 continued\", \"comment\": \"Here is a detailed explanation of the training and testing setup. Our split and methods are consistent with other work in building encoding models for naturalistic continuous stimuli (see Wehbe et al. (2014)).\\n- Due to the resolution of fMRI, brain images are acquired at a rate of 2s per image, while each word is read for 0.5 seconds. \\n-The text was presented to the subjects in 1291 TRs.\\n- In each TR (1 TR = 2s), 4 tokens were presented one by one (each token was presented for 0.5s). \\n-All of our features are word-level features. Thus, each token has a feature vector associated with it. Let the dimensionality of the feature vector be $d$.\\n-The brain activations are measured at the level of TRs. Thus we have 1291 activation values (denoted by $Y = [y_1, y_2, \\u2026, y_{1291}]$ in the paper).\\n-Now, since 4 words were presented in a TR, we have 4 feature vectors associated with each TR. We reduce these 4 feature vectors into one vector by just summing them. This becomes the $d$ dimensional aggregated feature vector $x_i$ that is associated with $y_i$. fMRI is recording slowly varying activity that is much slower than the pace the words are presented at. It is thus not possible to distinguish the activity related to individual words, and our analyses necessarily have to be at the aggregate word level (again this is consistent with other naturalistic imaging work).\\n-Thus, we now have a set of input vectors $X = [x_1, x_2, \\u2026, x_{1291}]$ and a set of output vectors $Y = [y_1, y_2, \\u2026, y_{1291}]$ and we want to analyse if we can accurately predict $y_i$ using $x_i$. However, because of the lag in the fMRI response, we instead try to predict each $y_i$ using $x_{i-1}, x_{i-2}, x_{i-3}$ and $x_{i-4}$. This is a common approach as mentioned in the manuscript. Thus, the final set of input vectors used to fit the models is $X_{lag} = [xWithLags_1, xWithLags_2, \\u2026 , xWithLags_{1291}]$ where $xWithLags_i = [x_{i-1}, x_{i-2}, x_{i-3}, x_{i-4}]$. The first few $xWithLags_i$ do not have defined $x_{i-1}, x_{i-2}, x_{i-3}, x_{i-4}$ and we just use zero-padding in these cases. \\n- We use the $R^2$ metric to measure the correctness of these predictions. This metric is computed as follows:\\n - The $X_{lag}$ and $Y$ sets are broken up into 4 contiguous and equal-sized folds (only the last fold has 1 less TR than the other folds). Let\\u2019s call them $X1, X2, X3, X4$ and $Y1, Y2, Y3, Y4$. Since they are contiguous, $X1 = [xWithLags_1, xWithLags_2, \\u2026, xWithLags_{323}]$, $X2 = [xWithLags_{324}, xWithLags_{325}, ... , xWithLags_{646}]$, $X3 = [xWithLags_{647}, xWithLags_{648}, ... , xWithLags_{969}]$, $X4 = [xWithLags_{970}, xWithLags_{971}, \\u2026, xWithLags_{1291}]$ and similarly for $Y1, Y2, Y3, Y4$.\\n - Then four separate models are built, each trained using 3 folds of data and evaluated on the remaining fourth fold. For example, the first model is trained using $X2, X3, X4$ concatenated and $Y2, Y3, Y4$ concatenated. It is then asked to make predictions with $X1$ being the input. Let us call these predictions $Z1$. The second model is then trained using $X1, X3, X4$ and $Y1, Y3, Y4$ and asked to make predictions with $X2$ being the input to get $Z2$. Similarly, we get $Z3$ and $Z4$.\\n - Finally, $Z1, Z2, Z3$ and $Z4$ are concatenated to obtain $Z$. The set of predictions $Z$ has length 1291 i.e. it contains predictions for the entire experiment. 
This final set of predictions $Z$ is compared to $Y$ to obtain the $R^2$ score.\\n - This setup does not lead to any intentional data leaks. We were not sure about what you meant by there being a leak. However, while pondering over this question, we did realize that there might be some sort of unintentional leak because of the lag in the HRF, leading to some close $y_i$ being very similar to each other. Thus, we obtained a new set of results by removing data from the 5 TRs which precede and follow the test fold from the training set of folds for each model. For example, this means that the second model is trained using $X1\\u2019, X3\\u2019, X4$ concatenated where $X1\\u2019 = [xWithLags_1, xWithLags_2, \\u2026 , xWithLags_{318}]$ and $X3\\u2019 = [xWithLags_{652}, xWithLags_{653}, \\u2026, xWithLags_{969}]$ and similarly $Y1\\u2019, Y3\\u2019$ and $Y4$. This removal of data leads there being no leaks even due to the lag in the HRF. Our results do not change much even after performing this correction as is evident in the revised manuscript.\\n\\nThe original dimensionality of BERT embeddings is 1024, which leads to a 4096-dimensional input vector as explained above after considering the delays and we have less than 1000 time points in the training set. We therefore reduced the dimensionality to avoid overfitting and chose 15 because it is the dimensionality of ConTreGE.\"}",
"{\"title\": \"Reply to Reviewer 3\", \"comment\": \"Thank you for your detailed review and for the suggestion to milden the statements. We followed this suggestion in the updated version of the manuscript by softening our contribution and expanding on some of them to give more details and we hope that these changes help address the other concerns of the reviewer.\\n\\nWe think that the statement of \\u201cIt is known that syntactic processing and semantic processing are mixed and that several brain regions contribute to constitute an understanding of language (see e.g. 1,2)\\u201d is too strong. In fact, the Blank et al. (2016) paper which we cite in our manuscript represents a new way of thinking about the language system that is not in agreement with many existing theories that posit that syntax and semantics are processed in different locations (see Fedorenko et al. (2020) for a recent discussion). We believe that one of the contributions of our paper is to show that this result generalizes (and therefore add confidence in it) by showing it holds in a new experiment and through an entirely different paradigm (building an encoding model with syntactic features instead of looking at activity across conditions). In science, we can only start trusting a finding once we see it being replicated and generalized to new conditions and tasks, making replications and generalizations valuable. What is also novel in our manuscript is the use of explicit syntactic feature spaces which go beyond univariate effort-based metrics such as node count, surprisal, information gain etc, and explicitly represent the syntactic information. This has not yet been done in the field of syntax processing during language processing (even though people have been using *semantic* feature spaces to encode the meaning of language for many years). \\n\\nThank you for the question about our multiple comparison correction. We were correcting at the brain level (each participant has around 30000 voxels). We agree with the reviewer that we need to also account for the multiple feature spaces used, and we repeated the analysis by grouping all the voxel level results (for all feature spaces and for all subjects) and then doing FDR correction once. This results in one threshold for all the results, which leads to small variations in the results, but the general pattern remains the same. We have updated the plots in the manuscript.\\n\\nWe did not intend to say something that suggests that controlled experiments were less useful, in fact, we think that both controlled and naturalistic experiments are complimentary in the scientific endeavor. We computationally controlled for the effects of word frequency and word length by explicitly including these measures in the early groups of feature spaces. Then, if the additional feature spaces such as ConTreGE still predict brain activity even after controlling for effects such as word frequency, the assumption is that these additional feature spaces must contain other types of information that are predictive. We agree with the reviewer that there is perhaps some correlation that cannot be removed, and we have included this limitation in the updated manuscript.\"}",
"{\"title\": \"Reply to Reviewer 4\", \"comment\": \"Thank you for your positive review. We have updated the paper to clarify that we mean how can we as scientists researching human cognition construct these embeddings in Q1, that we are contrasting these different embeddings when using them as input for encoding models for fMRI in Q2 and that we contrast our embeddings of structural syntactic information with embeddings of semantic information in Q3.\\n\\nWe understand why the reviewer is questioning the surprising nature of the result that complex syntactic structure is encoded in the brain. While this sounds intuitive, many recent studies have doubted the role of syntactic processing during comprehension. We have cited some of these works in the discussion including Pylkkanen (2020) and Gauthier and Levy (2019). Pylkkanen (2020) for example argues that there is no conclusive evidence to indicate that the brain puts a lot of weight on syntactic composition. Thank you for the suggested paper.\\n\\nWe were not sure what you meant by the double brackets, we are using the provided ICLR template. We would be happy to fix the bracketing if you could point us to an example of where this needs to be done!\"}",
"{\"title\": \"Reply to Reviewer 1\", \"comment\": \"Thank you for the review. From our understanding of this review, it seems like the need for a new way to encode constituency trees is being questioned. To answer this question, it should be noted that our method of encoding subtrees allows the embeddings to be purely syntactic and it has the ability to encode the structure of any subtree (even incomplete ones). RecursiveNNs and TreeLSTMs are architectures that can be used to generate constituency trees and/or to compute embeddings of the constituents of a sentence. However, these embeddings contain semantic information. Thus, such architectures cannot be used for our use case since we are looking to compute purely syntactic representations that encode the structures of the subtrees. Distributed Tree Kernels (DTKs) can certainly be used to obtain such representations of subtrees of constituency trees. However, using them did not yield good results when compared to ConTreGE.\\n\\nThe result is here (anonymous link) - https://drive.google.com/file/d/1s_7Upt_B6svVPYSZJtAA-0pxTr7tCkgn/view?usp=sharing\\n\\nTo get this result, we embedded the incomplete subtrees (used to construct ConTreGE vectors) by employing the embedding methods used in DTKs. 8192 dimensional embeddings were computed (same as in the original DTK paper) using fast shuffled convolutions and the lambda parameter was set to 0.4 (this value of lambda was used since the original DTK paper showed that it produces some of the best results). Then, PCA was used to reduce the dimensionality of these embeddings to 178 dimensions (retains 80% of the variance). Finally, we tested if the inclusion of this feature led to a significant improvement in the $R^2$ values after controlling for punctuation, the effort-based metrics and POS and DEP tags (similar comparison as in figure 3 (i) but we use the new DTK-based vectors instead of ConTreGE here).\\n\\nAlso, we compare our new syntactic embeddings with other established syntactic features rather than other embedding techniques because we believe that this question is more important. We do not argue that our embedding technique is perfect but we do believe that it can be an important asset to studying syntactic processing and the paradigm itself is extensible and novel, lending itself to usage in future studies.\\n\\nNow, the second question being asked is why the proposed model should be closer to the brain activity. We have tried to answer this question by showing that the ConTreGE vectors encode higher level syntactic information that is not encoded by the other syntactic features. Since this information might be processed by the brain too, this might explain why these embeddings are better at predicting brain activity. \\n\\nIt must be noted that, to the best of our knowledge, no other work has explored building comprehensive, purely syntactic feature spaces to study the brain. One of our motives is to start this line of research.\"}",
"{\"title\": \"Reply to Reviewer 2\", \"comment\": \"We thank the reviewer for their detailed review. Regarding the contributions of the paper, we would like to point out one contribution that appears to be understated - the new methodological approach to study syntax in the brain. Even though many researchers have investigated the brain basis of syntax, no one yet to our knowledge has explicitly used syntactic representations. We consider this work to be a way to introduce other neuroscientists to this approach in the hope that they will partake in creating new syntactic feature spaces to test their syntactic hypotheses. Studying syntax in fMRI and with natural language is a complicated experimental endeavor with many limitations and challenges which might not be obvious to the reader. Through this work, we are able to propose a new way to study syntax, and leave the door open for other neuroscientists to propose other feature spaces encoding structural information that is constructed based on other hypotheses about syntax.\\n\\nWhile it is true that effort-based metrics are easy to interpret, we have shown that they do not predict much brain activity after controlling for sentence boundaries, and that explicit feature spaces that encode syntactic information are more predictive of brain activity. Thus, it does not seem justified to discard more predictive yet possibly less interpretable representations in favour of these existing metrics. This would be akin to not using deep neural networks just because their workings are not entirely interpretable. Even given that concession, our feature spaces are not all non-interpretable: the part of speech and dependency role features are by definition interpretable, and the parse-tree embeddings are effectively a transformation of the structure of the constituency tree of a sentence, which is a very well established theoretical linguistic construct. Moreover, we try our best to deconstruct the graph embeddings we propose using the ancestor prediction analysis. Encoding objects and structures such as trees is a difficult and open problem in machine learning and AI. As methods progress, we might one day have more expressive or more easy to interpret graph embeddings of parse trees. Our framework is extensible and allows for future analyses to use different embeddings.\\n\\nWe thank the reviewer for the discussion of the novelty in this paper. We believe the following are two valuable contributions of this paper: (a) it proposes a new methodology for computational cognitive neuroscience with a new way to look at an important question and (b) it confirms some of the findings in the field (which are more controversial than stated by the reviewer) in a new experiment and using another paradigm. We believe both of those contributions can contribute to the progress and reproducibility of the cognitive neuroscience of language.\"}",
"{\"title\": \"General reply to reviews\", \"comment\": \"We thank all the reviewers for their comments and suggestions for the manuscript. We have run a few additional analyses to address some of the concerns of the reviewers and we show that they have not changed the conclusions of the paper.\\n\\nOne major theme across the reviews concerns the contributions of this paper. This paper calls for the use of syntactic structure embeddings to study syntax in the brain, and thus draws new connections between AI and neuroscience in the area of syntactic processing, in a way that did not exist before. Indeed, neuroscientists have not to our knowledge studied the representation of syntax. Instead, they typically look for areas that are correlated to the effort related to processing syntax. This paper therefore is a new methodological contribution that calls for a shift in perspective in the cognitive neuroscience of syntax. It shows that AI can directly contribute to the neuroscience of language, in the form of encoding hypotheses about syntactic processing in embeddings that explicitly capture syntactic information (and not just univariate effort based metrics). \\n\\nMultiple reviewers were concerned that the results about the distributed representation of syntax in the same regions where semantics is processed were not \\u201cnew\\u201d. There are multiple competing theories about how syntax is represented in the brain in the literature, with one group considering that syntax and semantics are represented separately, and the other considering that they are represented together. Both groups have empirical evidence supporting their conclusions. How can the scientific community decide between competing theories? By designing new experiments that test these theories in different settings and see what conclusions these additional studies point to (when considered together as a body of work and not at the individual study level). Our current paper fits in this philosophical view of science and offers a generalization of the theories of the second group (semantics and syntax are processed jointly) in a new experiment, that is naturalistic (and therefore closer to how the brain processes information in real life than controlled conditions) and using an entirely different method that relies on a computational model (explicitly encoding the syntactic information in syntactic embeddings).\"}",
"{\"title\": \"Interesting work, but conclusions yield minor impact / may not follow from results\", \"review\": \"This paper derives various types of graph embeddings to encode aspects of syntactic information that the brain may be processing during real-time sentence comprehension. These embeddings, along with indicators of punctuation, POS and dependency tags, and BERT embeddings, are used to predict brain activity recorded via fMRI. The authors argue that this is an improvement over use of effort-based metrics to predict brain activity, as these embeddings contain richer information than is captured by distilling down to a single measure of effort. They show that various brain regions are significantly better predicted by the syntactic embeddings than by the effort-based metrics and POS+dependency indicators. BERT embeddings, however, prove to be a better predictor (than syntactic and other predictors) across much more substantial areas of activity.\\n\\nI'm all for leveraging representations from NLP models to ask questions about the brain, and some of the patterns identified here are interesting, but I don't think this paper has arrived at sufficiently impactful takeaways to merit publication as yet. The main concrete conclusion drawn from these analyses is that complex syntactic information is encoded in the brain. But this really isn't a particularly disputed claim, so it's definitely underwhelming as a takeaway. The sensitivity of the brain to hierarchical syntax is also an underlying assumption of syntactically-grounded effort-based metrics, so although these graph embeddings capture richer information, if they only give us the conclusion that the brain is doing syntax, then they have not really given us new information. A related conclusion made in the paper is that syntactic information is represented in a distributed fashion, but the effort-based metrics seem to suggest a similar conclusion (if I'm correctly interpreting Fig 3f), so again this is not unique to the proposed representations. I do recognize that the proposed syntactic representations are predictive of activity over and above the effort-based metrics in some voxels, but it's not clear exactly what we learn from this fact.\\n\\nA secondary conclusion made in the paper is that regions that process syntax are not specialized for syntax. This conclusion seems to be made based on the fact that BERT embeddings are stronger predictors than the syntactic embeddings in many of the regions in which the syntactic embeddings outperformed other predictors. However, as the authors acknowledge, BERT embeddings also encode syntactic information, and this makes it more difficult to interpret this pattern of results. It seems that the stronger performance of BERT embeddings could just as easily be attributable to better/richer encoding of relevant syntactic information as to encoding of semantic information. What the results seem to show is simply that the BERT embeddings are better predictors of brain activity than any of the other representations used, so the question then raised is what exactly the BERT embeddings capture that the brain activity is also sensitive to. 
\\n\\nI'll also say that a downside of the graph embeddings is that they seemingly reduce transparency relative to the effort-based metrics -- I'm certainly willing to believe that they encode richer information, but it doesn't seem clear precisely what information they are adding.\\n\\nAll in all, I think this is an interesting line of work, but I'm not convinced that we come away having learned something impactful, both because some of the proposed conclusions answer questions that aren't really at-issue / aren't really uniquely addressed by the proposed representations, and also because some conclusions are drawn that don't seem to follow clearly from the observed results.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review\", \"review\": \"This paper aims to propose a parse tree embedding that correlates with brain activations better than existing measures on sentences. It is an extremely important topic as it can draw the link between Artificial NN and real NN on the problem of syntactic processing.\\nHowever, the paper leaves with a major quesiton: why? \\nWhy the proposed parse tree embedding model is a good model. There is a wide range of models embedding parse trees, e.g. RecursiveNN, TreeLSTM and Distributed Tree Kernel. All these models are embedding trees in different ways. Why the proposed model should be closer to the brain activity? This should be definetly clarified in the description of the parse tree embedder.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Looking for syntax correlates in fMRI data\", \"review\": \"This paper presents a neuroimaging study investigating the way syntax is represented. The authors compare models that encode syntax with fMRI data. They find that syntax and semantics are computed/represented in overlapping brain regions and that \\\"complex\\\" syntactic information is decodable.\\n\\nI liked this paper and think it is valuable in, amongst other things, furthering the theoretical position that the dichotomy between semantics and syntax is a more conceptual/high-level one than can be found at the level of neuroimaging data. \\n\\nThe authors give an exposition of their research questions, however I think these can be phrased even more clearly in some cases. For example, for Q1 do they mean for humans in general (as in in the brain) or do they mean us as scientists researching human cognition? For Q2, do they mean they will use a model-based fMRI analysis? For Q3, is this multivariate or univariate? Just adding those short phrases or words to the research questions will help situate the reader, in my opinion.\\n\\nWhy is it in and of itself surprising that (complex) syntax is encoded in the brain? In other words: \\\"Several regions of the brain\\u2019s language system were predicted by ConTreGE, hinting that the brain does indeed encode complex syntactic information.\\\" \\u2014 why would \\\"the brain\\\" not? Please do not get me wrong, this is obviously important/required to be shown but the research herein actually has even more value (or could have) and can be framed and discussed as such. Surely, the interesting results (since we know from other sources and common sense that syntax is, has to be, encoded in the brain somewhere since it plays a role in cognition) is the actual relationships of the results to the overarching theory and should be foregrounded more. \\n\\nA potentially useful theory paper is, and which might interest the authors: https://doi.org/10.1162/jocn_a_01552\", \"minor_point\": \"the in-text citations would look better without double brackets \\u2014 which is easy to fix in LaTeX.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Summary\\n\\nThe manuscript focuses on understanding the features of syntax that are processed by different brain regions as captured by fMRI. The proposition is to move beyond effort-based metrics to subgraph embeddings for modeling syntactic structure. In addition, the approach focuses on natural reading and incremental models while a sentence is being read. I liked reading the paper and it reports interesting results. The advantage of the research is that it uses natural reading stimuli, but that also comes with disadvantages for supporting the conclusions of the manuscript (see details below).\\n\\nGeneral comments\\n\\nThe research questions and contributions are clearly written. I think the authors could milden some of the statements that are not considered to be really novel findings of the present manuscript. \\n\\nIt is known that syntactic processing and semantic processing are mixed and that several brain regions contribute to constitute an understanding of language (see e.g. 1,2). It is also known that the brain not only processes structure seen thus far but also predicts future structure. However, this, and other measures, such as syntactic violations, information gain, etc. has been found already in the early studies (see 3,4,5) and more recently in natural reading (see 6). Maybe the authors could be a bit more specific in what exactly is novel in the present manuscript. \\n\\nMethods\\n\\nI did not fully understand the data split for training and testing. It states: \\u201cis first split into 4 contiguous and equal\\nsized folds. Each model uses three folds of the data for training and one fold for evaluation.\\u201d Now, if the data is from natural reading of text, does this mean that you are using samples that occur both before and after a particular word is being read? If so, does this affect the results as evidence from other (latter) parts of the sentences can be used to draw predictions for words earlier in the sentence? Prior to this, it is stated that you only use x-1,\\u2026x-4 for prediction and that \\u201c(to form a prediction for the entire experiment in which each time point\\nis predicted from a model trained without the data for that time point)\\u201d. But is there now a chance that something in x+n would be in the training data for the models? Wouldn\\u2019t this mean that some data would \\u201cleak\\u201d in x+n and would be in the training set for a particular sample? Maybe I didn\\u2019t correctly understand the setting, but I would like to see a better explanation of how exactly the split is done and test and training sets are constructed, and why this is the correct way for the particular analysis conducted. \\n\\n\\nThere are many models tested and I hope the authors are correcting for multiple testing; it is stated that Benjamni-Hochberg False Discovery Rate correction is applied, but it is not entirely clear how this is done. \\n\\nI did not understand why principal component analysis (PCA) was used to reduce the dimensionality to 15. Why would the dimensionality need to be the same? How does this affect the results?\\n\\nConclusions\\n\\nThe conclusions of the paper rely on the advantages of naturalistic sentence reading, but the claims that more controlled experiments that have revealed specific regions responsible for syntactic processing would be less important, are not supported by the results presented in the manuscript. There was nothing in the study setting that would have controlled for other effects. 
For example, what if there were some other factors in the sentences that explain the effects. What if the other model predicts some other features that are correlated with syntax structure, such as more complex words, words that carry more information, more frequent, rare, short, or long words? I think these cannot be fully excluded and I would have liked to see some other results than only the predictions of the models to confirm that the prediction study can be considered valid. Without a proper control condition or other analysis of the data, it is not evident that all of the conclusions hold.\\n\\n[1] https://www.sciencedirect.com/science/article/pii/S1053811915011064?casa_token=uVeVeMM5HCwAAAAA:Uj7IsuC-Kf6xwHQ-Rgs8RhxTl8A_PID_2fVStTqnuTA8Lshjb8Lil-iDOhyHJgADMGhHtjDM8kE\\n\\n[2] https://www.nature.com/articles/s41467-019-08848-0\\n\\n[3] https://www.sciencedirect.com/science/article/pii/S1053811916001592?casa_token=neHunii1ASwAAAAA:P1lkCeo5kDULBuy_6VXAjFN2gx2xK-algZXjXWaRgNxpnhGnWA3y_Vh_ey4TvynL9-CZOHHrkI0\\n\\n[4] https://www.sciencedirect.com/science/article/pii/S0093934X14001515\\n\\n[5] https://www.mitpressjournals.org/doi/abs/10.1162/089892903322370807?casa_token=SqKOci8rCScAAAAA:rvso0ntTEMzNym_FhyH-ylSilW0qRHjEiBLrFbXlV703WHLOTTZ4G3xHVHhrxRiIBXLB9PqadvV3\\n\\n[6] https://www.nature.com/articles/s41598-020-63828-5\\n\\n[7] https://www.sciencedirect.com/science/article/pii/S1053811916001592?casa_token=WtO6Twcn9yEAAAAA:6aeIl_agLqFBsaViu4Audy7KdHJKtELPp6TF2KT0stCpng80APTfD7V1rZu0gpzqDqFp8ZKHy2E\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
9UFIOHeVEh | Identifying the Sources of Uncertainty in Object Classification | [
"Luis Armando Pérez Rey",
"Berk İşler",
"Mike Holenderski",
"Dmitri Jarnikov"
] | In image-based object classification, the visual appearance of objects determines which class they are assigned to. External variables that are independent of the object, such as the perspective or the lighting conditions, can modify the object's appearance, resulting in ambiguous images that lead to misclassifications. Previous work has proposed methods for estimating the uncertainty of predictions and measuring their confidence. However, such methods do not indicate which variables are the potential sources that cause uncertainty. In this paper, we propose a method for image-based object classification that uses disentangled representations to indicate which external variables contribute the most to the uncertainty of the predictions. This information can be used to identify the external variables that should be modified to decrease the uncertainty and improve the classification. | [
"Classification",
"Interpretability",
"Disentangled Representations",
"Uncertainty Estimation"
] | Reject | https://openreview.net/pdf?id=9UFIOHeVEh | https://openreview.net/forum?id=9UFIOHeVEh | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"IAobiE5d8R",
"suMhjT8AxAl",
"bSGSOVecCX2",
"qDBDFmOEIMc"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512387,
1603882415222,
1603838874535,
1603390904240
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3701/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3701/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3701/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The manuscript presents an approach for identifying sources of uncertainty in object classification tasks by disentangling representations in latent spaces.\\n\\nThree reviewers agreed that the manuscript is not ready for publication. \\nSome of the concerns are conceptual flaws, weak evaluation protocol, and an incorrect interpretation of experiment results.\\n\\nThere is no author response.\"}",
"{\"title\": \"The method suggested does not apply to real world problems, and the findings presented are rather limited\", \"review\": \"\", \"major_comments\": \"-\\tThe methods suggested does not apply to real-world classifiers:\\no\\tIt required training with strong labels only available in simulation\\no\\tClassification is done based on a low-dimensional intermediate representation of the \\u2018entangled space\\u2019, and its accuracy is probably inferior to what can be obtained with general CNNs (no comparison was made)\\no\\tI do not see how this can be applied to real-world competitive classifiers\\n-\\tThe findings are rather limited\\no\\t The method suggested for finding which extrinsic variable is able to reduce the prediction entropy is a) not well founded, and b) was not tested experimentally \\n-\\tThere are presentation clarity problems, with non-sentences, broken references, etc..\", \"more_detailed_comments\": [\"\\u201cSeparate the identity of cars from their pose (Yang et al., 2015).\\u201d \\u2013 this is not a sentence.\", \"The probability estimates provided by neural networks are uncalibrated (not reflecting the real error probability)\", \"o\\tThis is not mentioned in the introduction, but is discussed in page 7\", \"Page 4 \\u201cmaximizing the lower bound of the latent variable model distribution\\u201d \\u2013 what is a lower bound over a distribution? Lower bounds are defined for scalars, not distributions. This sentence is not clear.\", \"Page 5: before equation7 the text reads \\u201cit is possible to find the latent variable that would decrease the uncertainty\\u2026\\u201d. However, the equation does not define a latent variable, not does the text in the 1-2 sentences following it. The equation defines a vector in R^M. It is not stated how it defines a latent variable\", \"Page 5: The process described in equations 8,9 is essentially making a gradient step (of the entropy) in each of the M_E possible directions and then measures which step reduced the Entropy. This is a numerical re-estimation of the gradient. Why is it done? At least for small \\\\alpha, the direction most minimizing the entropy can be determined from the gradient directly (the coordinate in which the (negative of ) gradient is highest)\", \"Page 7: The results are shown in Table 4.2, = 4.2 is a broken reference\"], \"oalso\": [\"it is not clear how mutual information is calculated. It requires discrete variables, and the quantization details may be important.\", \"The classifier trained has a bottleneck at the \\u2018entangled representation\\u2019 layer, and is hence probably is far from being optimal in terms of its obtained error\", \"The method suggested for estimating which extrinsic variables most affect the uncertainty was not applied experimentally (no results were reported)\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Addressing an important problem but lacks sufficient novelty and experimental validation\", \"review\": \"This paper proposes to identify sources of uncertainty by disentangling representations in latent spaces for object classification tasks. Experiments on a synthetic dataset demonstrate that the proposed method can disentangle different extrinsic variables' contribution to the prediction uncertainty. In addition, the authors propose to modify the latent variables to decrease uncertainty in the predictions.\\n\\n### Clarity\\n#### Pros\\n- This paper provides a clear discussion of the main contributions, assumptions, experimental settings. \\n\\n#### Cons\\n- The experiment section needs more analysis. For example, how to explain the results in Table 1, which variable contributes most to the uncertainty and which contributes least?\\n- The paper contains typos and grammatical errors and need to be proofread carefully.\\n- Several questions needed to be addressed.\", \"questions\": \"1. How to choose the latent representations? In real applications, there are usually more factors.\\n2. It is not clear how the model in Figure 2 is trained. e.g. what is the loss function, do you need supervision on the latent variable.\\n\\n### Originality\\n#### Pros\\n- The paper proposes a method to identify sources of uncertainty by disentangling representations in latent spaces in object classification tasks.\\n\\n#### Cons \\nThe contribution is unclear as the disentangling method is based on the DC-IGN method. Moreover, it is unclear how to relate the imaging factors to the latent variables as multiple imaging factors can impact the same latent variable. \\n\\n### Significance\\n#### Pros\\n- It addresses an important problem; the problem of decomposition of the sources of uncertainty in model prediction is important and has not been sufficiently addressed in existing works.\\n\\n#### Cons\\n- Entropy of the softmax is not a reliable criterion to capture uncertainty and, more importantly, other factors besides the imaging factors such as the input data density can also contribute to the uncertainty of the prediction. \\n- The experiment is performed on synthetic data, which leads to limited evaluation conditions. As there is a distribution difference between synthetic data and real data, it would be more significant if the proposed method can be applied to real images.\\n- The latent variables in the study are limited, only including light intensity, pose, color, etc. In real applications, there are usually many more factors such as background, occlusion, noise, resolution, etc. It would be better if the authors can study more factors or explain why specific factors are chosen.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Review (Reviewer3)\", \"review\": \"I have very mixed feelings about the work. On one hand, the problem is interesting (perhaps novel) and deserves the attention of the community. On the other hand, the paper is half-baked, there are conceptual flaws, the evaluation protocol is weak, there is an incorrect interpretation of experiment results;\", \"my_comment_on_novelty_is_the_following\": \"I would say that algorithmic novelty, is the weak point, however, bringing the attention of the community to important problems is not less important than new algorithms (and even is more important IMO). Taking into account that \\\"novelty\\\" is extremely subjective, I would say that there is nothing wrong with the novelty side.\", \"writing\": \"the paper is clearly written.\", \"concerns\": \"1) The identified factors do not proofed to be meaningful and useful. Pure identifying factors of uncertainty or reduction uncertainty of a model do not give us the full picture. The factors may be flawed in many ways: factors may be too dependent on random errors of the model but not to be semantically meaningful. Reduction uncertainty of a model is not always useful, as most models nowadays are often wrongly overconfident, more confidence is not necessarily good! It has not been proven that identified factors are connected to uncertainty. \\n\\n2) There is a noticeable connection between the proposed method and adversarial attacks. We can interpret a method as a way to \\\"trick\\\" a model to be more ceratin on adversarial modifications of embeddings. In this case, the results would not be useful.\\n\\n3) There are two flaws of experiments in section 4.2: i) the authors compare Expected Calibration Error between different datasets, these figures are not comparable, it is the same as compare accuracy between MNIST and ImageNet ii) ECE is a biased estimate of true calibration with a different bias for each model, so it is not a valid metric to compare even models trained on the same data (see Vaicenavicius2019). Yes, ECE is the standard in the field, but it is the wrong standard that prevents us from meaningful scientific progress, so we should stop using it.\", \"overall\": \"The direction is interesting, however, the evaluation protocol needs to be more thoughtful. At the moment, it is impossible to verify the real performance of the method.\", \"suggestions\": \"1) Evaluate the identifying algorithm on downstream problems. For example, can we use these factors to collect additional data that will improve predictive performance better than uniformly sampled data (aka active learning)? There should be many more interesting settings.\\n\\n2) Evaluate the model on controllable uncertainty estimation setting: identify setting where we understand on which examples we expect the model to be uncertain (due to not enough data in a dataset in a certain region, etc.), validate selected factors.\\n\\n3) To use the squared kernel calibration error (SKCE) proposed in [Widmann2019] along with de facto standard, but biased ECE. The SKCE is an unbiased estimate of calibration. There might be some pitfalls of this metric that I'm not aware of, but the paper looks solid and convincing. 
Also, please put attention to Figure 83 in the ar\\u0425iv version.\", \"editing\": \"- Citations: It is better to use \\\"authors (year)\\\" style when a citation is a part of a sentence---\\\"(Gabbay & Hoshen, 2020) proposes\\\" to \\\"Gabbay & Hoshen, (2020) propose\\\", and otherwise \\\"(authors, year)\\\" when a citation is not a part of a sentence \\\"on original VAE framework Higgins et al. (2016);\\\" to on original VAE framework (Higgins et al., 2016; .....).\\n- \\\"only only\\\" typo in 4.1\\n\\n[Vaicenavicius2019] Juozas Vaicenavicius, David Widmann, Carl Andersson, Fredrik Lindsten, Jacob Roll, and Thomas B Schon. Evaluating model calibration in classification. AISTATS, 2019.\\n\\n[Widmann2019] Widmann D, Lindsten F, Zachariah D. Calibration tests in multi-class classification: A unifying framework. In Advances in Neural Information Processing Systems 2019 (pp. 12257-12267). https://arxiv.org/pdf/1910.11385.pdf\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
9GUTgHZgKCH | Reducing the number of neurons of Deep ReLU Networks based on the current theory of Regularization | [
"Jakob Heiss",
"Alexis Stockinger",
"Josef Teichmann"
] | We introduce a new Reduction Algorithm which makes use of the properties of ReLU neurons to significantly reduce the number of neurons in a trained Deep Neural Network. This algorithm is based on the recent theory of implicit and explicit regularization in Deep ReLU Networks from Maennel et al. (2018) and the authors.
We discuss two experiments which illustrate the efficiency of the algorithm to reduce the number of neurons significantly with provably almost no change of the learned function within the training data (and therefore almost no loss in accuracy). | [
"Reduction",
"Compression",
"Regularization",
"Theory",
"Pruning",
"Deep",
"Interpretability",
"Generalization"
] | Reject | https://openreview.net/pdf?id=9GUTgHZgKCH | https://openreview.net/forum?id=9GUTgHZgKCH | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"IpSroQfbIN",
"o21IhEgqHbT",
"-n3wWLD40m8",
"6hRBXxwJpH",
"FPiAsxyr-ld",
"9hVzFgU-ea",
"cYukXIXuRm",
"PiskuPL0h13",
"Pupl2qoZ_Pq",
"oteCBNhqIh",
"AwyoiDE5W8",
"GvVHhZXZkx2",
"Sv_CdJ-pRHz",
"nFLnVR93Bld",
"jcmv8fjSYTQ",
"3bjb6sh6Isp"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040357136,
1605888618288,
1605632258197,
1605629951365,
1605626936617,
1605626877208,
1605626513146,
1605477707756,
1605477441212,
1605281254604,
1605280968843,
1604698587764,
1604381140826,
1603948565783,
1603885117232,
1603796020057
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3700/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This is a clear reject. None of the reviewers supports publication of this work. The concerns of the reviewers are largely valid.\"}",
"{\"title\": \"Thank you for clarifications.\", \"comment\": \"I thank the authors for the clarification of the notation and correction of typos. I do now see how the equations work (mathematically), i.e., I retract my concerns on the function g and clusters containing a single neuron. The motivation for this pruning step still requires detailed explanations.\\nI am not able to present a reference that explicitly says that neurons that are always active can be combined to a linear function, but I would still call this well-known. It is implicit in any article on linear regions of ReLU networks, using that the entire network function is linear for a fixed activation pattern.\\nI further thank the authors for additional explanations of their paper, which outline the possibility of an interesting theory and contribution. The presentation, however, needs considerable improvement that allows verification. The method should be verified experimentally on more complex datasets and compared to state of the art pruning techniques (both theoretically and experimentally).\"}",
"{\"title\": \"Clearer presentation of our paper and more experiments are needed\", \"comment\": \"We agree that we should have made our explanations clearer and more explicit. We also agree that it would probably be better to first publish the other papers.\\nDo you have a reference for your point (b), where always active neurons are combined?\\nYou only mentioned step 1 (and maybe 2) of our algorithm, yet we consider the third step (section 4.3) as the most mathematically interesting.\", \"our_main_contribution\": \"We have a theory that tells us that in the case of a perfectly trained neural network there should be many neurons that can be removed without changing the learned function at all. Our second main contribution are the details of our algorithm that approximates this reduction behavior in a numerically stable way for a not perfectly trained neural network. (We agree that we were not able to communicate our theory very well.)\\nWe invite you to read our answers to the other reviewers if you are interested in understanding what the theory is about.\\nThere are indeed similarities between our architecture and ResNets, the main difference is that we can train the affine maps. We do not see our main contribution in introducing a new architecture. The general concept of a P-functional is a functional P that fulfills eqs. (1) and (2) for some architecture. We define the P-functional of our architecture in eqs. (3) and (4).\\nWe tried to make the connection between eq. (4) and our algorithm clear in the paragraph after eq. (4), but we agree that this is very hard to understand and needs a much clearer explanation.\"}",
"{\"title\": \"We agree that we need to improve the presentation of our paper and add more experiments\", \"comment\": \"We agree that more experiments are necessary, that we should compare it to the state of the art (in fact, we will shortly add plots which show that it does outperform the default pruning method as implemented by tensorflow) and that it can be hard to read. We have to rethink how we present our theoretical results to make them less misunderstandable.\\nWe still think that our results give valuable insights into the theory of pruning and provide techniques that can actually improve pruning in practice, but we agree that we were not really able to communicate our main messages in this text. Maybe our answers to the other reviewers help you to understand what we actually wanted to express.\"}",
"{\"title\": \"Small further comment\", \"comment\": \"Thank you very much for pointing out the typos!\\n@2.: in \\\"reduce the number of neurons by 90% to 99% without introducing sparsity\\\" sparsity means that the weight matrices contain a lot of zeros (potentially spreaded all over the matrix). When you remove a complete neuron the weight matrices and biases get smaller dimensions, so after removing neurons you still have a FULLY connected feed forward neural network with less neurons and therefore less parameters (the parameters get completely removed by decreasing the matrix dimensions instead of setting the parameters to zero, in other words we remove complete columns and rows of the matrices instead of individual entries). Most state of the art pruning methods are so-called weight-pruning methods that remove single weights, so they set some entries of the weight matrices to zero (a sparse matrix). They can get some benefits in memory and evaluation speed from exploiting that there are so many zeros in the matrix, but if you for example set 50% of weights to zero and use the latest technologies to store sparse matrices efficiently and to do sparse matrix multiplications efficiently you can not reduce the memory consumption to exactly 50% and especially on GPUs or TPUs the computational time and energy consumption perform worse than 50% as you can read in the paper by Gale et. al 2020 cited. Does this answer your question?\"}",
"{\"title\": \"Adressing 7 - 8\", \"comment\": \"7. Our cluster does not simply cluster neurons together which have similar values of $(v_k,b_k)$ as many standard clustering approaches do it (e.g. https://www.mdpi.com/1424-8220/20/21/6033/htm). I am not aware of any theory telling us that there is a bounded number of clusters regarding $(v_k,b_k)$. And indeed there are solutions to the optimization (that we actually observe in practice) where there are unbounded numbers of clusters of $(v_k,b_k)$. Our clustering clusters neurons together that have almost the same vector representation: $(\\\\frac{b_k v_k}{||v_k||^2},\\\\frac{v_k}{||v_k||})$.\\nWith our vector representation obviously all the neurons with equal $(v_k,b_k)$ will also be clustered together but additionally many neurons will be clustered together that have very different parameters but still can be clustered together without any change of the function if their vector representation is identical. And the theory tells us that there should be many neurons that have exactly identical vector representation. Do you see from this explanation that our vector representation is strictly superior to the vector representation $(v_k,b_k)$ for ReLU neural networks?\\nWe know from the theory that for the perfectly trained neural network that every neuron should be either zero or perfectly aligned (with respect to our vector representation) with one of the $n_j^*$ clusters. So basically we only remove numerical artefacts that wouldn\\u2019t change anything if the neural network was perfectly optimized in step 2 (sec 4.2) and 3 (sec 4.3). **We see this as our main contribution: We have a theory that tells us that in the case of a perfectly trained neural network there should be many neurons that can be removed without changing the learned function at all. Our second main contribution are the details of our algorithm that approximates this reduction behavior in a numerically stable way for a not perfectly trained neural network.** (We agree that we were not able to communicate our theory very well.)\\nIn the mean-time we found that other papers use almost equivalent vector representations for clustering, but they lack a theory that for perfect training the neurons would actually be perfectly clustered into a small number of clusters. And they have a harder time with not perfectly trained neural networks since they do not combine it with step 1 and step 2 and they use less stable formulas/algorithms.\\nIn step 1 (sec 4.1) we do not change the training loss $L$ and make extrapolation more natural. Please see our explanations of step 1, 2 and 3 in our [answer to reviewer AnonReviewer2](https://openreview.net/forum?id=9GUTgHZgKCH¬eId=AwyoiDE5W8) for more details.\\n8. Are you asking about step 1 (sec 4.1)? In theory one could implement it with a complexity (#parameters+#neurons)$\\\\cdot$#datapoints=O(#parameters$\\\\cdot$#datapoints), if you make one forward pass per datapoint where you check the sign of each neuron. The computational complexity of our implementation is #parameters$\\\\cdot$#neurons$\\\\cdot$#datapoints. Empirically we have seen that for MNIST 60 out of 60 000 data points give already an extremely good approximation and takes only seconds to compute. We could easily afford to use more than 60 data points, but we didn\\u2019t see substantial improvement of using more. We were ourselves slightly surprised that such a small number of data-points is sufficient. 
To some extent we could explain this phenomena on an intuitive level, but this would fill multiple pages and still not be mathematically precise. There is a small number of neurons (ca. 10 of 1000) that get removed when we only use 60 data points that would not be removed when we use all data points. But we think that these neurons are almost outside neurons. From intuition and from experiments we believe that these almost outside neurons can be replaced without problems. Maybe we should include some experiments that help justifying this approximation?\"}",
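A small sketch of the vector representation discussed in point 7 above (illustrative code, not the authors' implementation): two neurons whose raw parameters $(v_k, b_k)$ differ by a positive rescaling define the same kink and map to the same representation, while a naive clustering on the raw parameters would separate them.

```python
import numpy as np

def neuron_repr(v, b):
    # Clustering representation: hyperplane offset point b*v/||v||^2
    # concatenated with the unit normal v/||v||.
    nv = np.linalg.norm(v)
    return np.concatenate([b * v / nv**2, v / nv])

v, b = np.array([1.0, -2.0]), 0.5
# Rescaled neuron (3v, 3b): ReLU(c*z) = c*ReLU(z) for c > 0, so the kink
# location is unchanged even though the raw parameters are far apart.
r1, r2 = neuron_repr(v, b), neuron_repr(3.0 * v, 3.0 * b)
print(np.allclose(r1, r2))                                  # True: same cluster
print(np.allclose(np.append(v, b), np.append(3 * v, 3 * b)))  # False: raw params differ
```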
"{\"title\": \"Thank you very much for your detailed and constructive feedback! Adressing 1-6\", \"comment\": \"Yes, we should do more experiments.\\n\\n1. Yes, we have to rethink how to make this text more readable. We don\\u2019t think that remark 4.1 is the most important result of our paper, but we should formulate it as a precise theorem. The way we have formulated it now contains a little mistake/impreciseness. The precise version of it would contain that the training loss $L$ is not changed by step 1 and that the learned function of each stack does not change on the convex hull of the representation of the training data from the previous stack. \\n2. Yes, we should compare it to the standard baselines. In fact, we will shortly add plots which show that it does outperform the typical pruning method as implemented by tensorflow.\\n3. We have one network with seven different outputs that we wanted to visualize. There are some interesting phenomenons that you can only see from multiple outputs together, but since we hadn\\u2019t any space to explain it, maybe picking only a view of them would actually be better. \\n4. We think that the bottlenecks have advantages for generalization and interpretability.\\nThe theory that drives our algorithm tells us that if every second layer has a finite width $d_j$ (bottlenecks) and every second layer has a \\u201cinfinite\\u201d width $n_j$, there will be only a bounded number of clusters $n_j^*<N+2$ of neurons in the infinite wide layers, where $N$ is the number of data points. From the theory $n_j^*$ converges to 0 if lambda goes to infinity and the worst case bound ist $N+1$, but in practice, even for very low values of lambda, we observe $n_j^*$ to be much lower than the worst case $N+1$. For MNIST $n_j^*$ is approximately below 100, while $N=60 000$.\\nSo one could also apply our algorithm if every layer had the same width, but the theory gives a better motivation for alternating between wide and narrow layers.\\nAfter reduction we need the affine layers. Before reduction they are optional.\\nDoes this answer your question?\\n5. The unpublished article is not submitted to ICLR. Probably it would be better to publish the other article first. Yes we should mention that $n_j>N$ is sufficient for the architectures used in this paper (for other architectures one needs $n_j$ to infinity), where $N$ is the number of training data points. (In your notation $d$ was the number of training data points?)\\n6. On page 3, footnote 1, $P(f)$ refers to $P(f)$ from eqs. (3) and (4). I agree that we should explicitly refer to these equations in the example. In the example $P(f)=\\\\infty$ because we use the standard-definition that the infimum over an empty set is infinity. $P(f)=\\\\infty$ implies that the function $f$ cannot be represented by the neural network architecture (even in the limit neurons to infinity it can not be globally approximated by a neural network). There are also functionfs $f$ with finite $P(f)$ that cannot be exactly represented by a neural network, but can be arbitrarily well approximated with neural networks with bounded regularization cost.\\nP from eq. (3) and (4) is formulated for the architecture $\\\\text{NN}_\\\\theta$ from Fig. 4 without the skip connections, where all the parameters are regularized. It is mentioned below eq. (4) that one could easily change it for the other architectures mentioned in this paper. 
For this paper it is only important that all these P-functionals have only a norm and not a squared norm in the integral.\\nDoes this answer your question?\"}",
"{\"title\": \"minor update of formula (8)\", \"comment\": \"The old equation (8) for $v$ was correct, but the new one should be more stable in the case of many noisy neurons within one cluster and it might be easier to interpret (since the weighted mean is more visible).\"}",
"{\"title\": \"Thank you very much for your very detailed and constructive feedback!\", \"comment\": \"* We agree that it is hard to follow the paper without the unpublished paper.\\nThe majority of pruning algorithms do not have any theory that tells that there should be weights that can be removed without changing the learned function at all.\\nAnd most of the pruning algorithms seem to have no bound (independent of the size of the neural network) on how many parameters you maximally need to not change the learned function. In contrast for our algorithm, our theory tells us that there is a bound $n_j^*$ such that no matter how large we set the width $n_j$ of the original network, we can always reduce it to $n_j^*$ neurons without any change of the learned function, if the neural network was perfectly trained before.\\nOne can easily obtain an upper bound $n_j^*< N+2$, where $N$ is the number of data points. From the theory $n_j^*$ converges to 0 if $\\\\lambda$ goes to infinity and the worst case bound is $N+1$, but in practice, even for very low values of $\\\\lambda$, we observe $n_j^*$ to be much lower than the worst case $N+1$. For MNIST $n_j^*$ appears to be approximately below 100, while $N=60 000$.\\nWe agree that step 1 and 2 are easier to understand than step 3, but we are not aware if step 1 (for neurons which are always active) has already been done in the literature? Do you have a reference? Also for step 2 we are not sure if exactly our pruning criterion is already used in the literature (typically, only $||w||$ is used)?\\nComing from a theoretical point of view we first implemented only step 3, which is the most mathematically interesting step of our algorithm. But then we soon found out that one needs step 1 and step 2 as preprocessing due to numerical noise and the reasons explained in [our answer to AnonReviewer2](https://openreview.net/forum?id=9GUTgHZgKCH¬eId=AwyoiDE5W8).\\nWe think step 2 is the least creative step of our algorithm, but I think we are the first who give a theoretical foundation why this step is so effective. The theory tells us that each of the $n_j$ neurons, that does not belong to any of the $n_j^*$ clusters, has to converge to exactly zero. In the parameter space we have $L_2$-regularization, so it is quite unexpected that there will be individual parameters that converge exactly to zero. In the function space however eqs. (3) and (4) show that it behaves more like $L_1$, so there should be many NEURONS that are exactly zero in all their parameters (as in Lasso-regularization). When you look at eqs. (3) and (4) because the sphere under the integral is with respect to the $L_2$-norm and norm inside the integral is a $L_2$-norm, there is not really a reason why there should be individual zero weights without the complete neuron being zero.\\nThe clustering follows from eqs. (1)-(4), but we agree that it needs more explanations. There is actually a typo inside the lowest line of eq. (8): one should remove the index k, so one obtains: $b=-\\\\xi ||v||$\\nIs it clear from the context that $\\\\sum_k$ is always the sum over all Neurons within one cluster? Is it clear from the context, what we mean with $b$, $v$ and $w$? Here $b$ is not the vector of all the $b_k$ but $b$ is the bias of the new neuron that should replace all the neurons within this cluster. We don\\u2019t understand your concern about $g$? We only plug in $\\\\xi$ into $g$ and $\\\\xi$ is a scalar, since every $b_k$ is a scalar. 
We omit the index $j$ for the layer.\\nWe agree that we use a slightly inconsistent notation for the weights and biases. We definitely have to reconsider our notation.\\nThe square roots of square roots are not a typo.\\nFor a single neuron the function doesn\\u2019t change (Note that the parameters of the neuron can change a lot during clustering but the contribution to the learned function does not change since $\\\\text{ReLU}(cx) = c\\\\text{ReLU}(x)$). Also if you have multiple neurons that all have the same vector representation $(\\\\frac{b_k v_k}{||v_k||^2},\\\\frac{v_k}{||v_k||})$, clustering them together doesn\\u2019t change the function. There would be an easier formula which also fulfills this, but the formulas (6) to (8) get a bit more complicated to deal better with not perfectly clustered neurons and to guarantee that the regularization cost is lower or equal after the clustering than before.\\n* We will soon upload experiments where we outperform the default pruning of tensorflow. We agree that we should do further experiments.\\n* We agree that the presentation needs a lot of improvement. The linear layers can already be added and trained during training, as we did in our experiments. This is optional (you could also train it without the skip connections). After training we have to update/add the skip connections during step 1. In eq. (3) and (4) we formulated P for the case without skip connections since it would be trivial to modify it when the skip connections are introduced.\\nWe definitely have to explain and derive the algorithm in more detail.\\nThank you for pointing out the typos! We have already uploaded the fixed version.\"}",
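As a sanity check of the claim above that merging neurons with identical vector representations preserves the function exactly, consider the following toy snippet (all parameters are hypothetical; it relies only on ReLU(c*z) = c*ReLU(z) for c > 0).

```python
import numpy as np

rng = np.random.default_rng(1)
v, b = np.array([0.7, -1.3]), 0.4
a1, a2 = 2.0, 0.5        # positive rescalings: same kink, same representation
w1, w2 = 1.5, -0.8       # outgoing weights of the two neurons

def two_neurons(x):
    return (w1 * np.maximum(a1 * (v @ x + b), 0.0)
            + w2 * np.maximum(a2 * (v @ x + b), 0.0))

def merged(x):
    # A single neuron with outgoing weight a1*w1 + a2*w2 reproduces both.
    return (a1 * w1 + a2 * w2) * np.maximum(v @ x + b, 0.0)

xs = rng.normal(size=(100, 2))
print(np.allclose([two_neurons(x) for x in xs],
                  [merged(x) for x in xs]))   # True: the function is unchanged
```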
"{\"title\": \"Adressing the raised bulletpoint-list\", \"comment\": [\"We agree that it would probably make more sense to publish the other article first, so that we can reference it to make the theoretical foundation of our algorithm more clear.\", \"Highway networks are a very interesting branch of research, but the difference is that the carry gate only propagates a scaled identity map, whereas our architecture can learn any arbitrary affine map from one bottleneck layer to the next. In future work, it would indeed be interesting to study the common properties between the two architectures.\", \"Yes we agree, that more experiments should be made. (Originally we thought of this paper to be mainly a theoretical paper. We think that this theory could be applied to better understand the lottery ticket hypothesis. But we agree that it would also be interesting to compare the performance of our algorithm to other state of the art algorithms as we think that our algorithm has some significant advantages.) In fact, we will shortly add plots which show that it does outperform the default pruning method implemented by tensorflow.\", \"Thank you very much for pointing out the typos! We have fixed them in LaTeX and will upload a corrected version immediately.\"]}",
"{\"title\": \"Thank you very much for your very detailed and constructive feedback!\", \"comment\": \"Especially the sentence in your summary \\u201cThe main idea of the proposed algorithm is basically to prune those neurons whose removal will not change the function a lot. To quantify this, the authors turn to the L2 norm of each neuron.\\u201d helps us a lot to see that what readers extract from this paper is very far from what we actually wanted to express in it.\\nWe should emphasize more clearly that the main contributions of our algorithm are step 1 and 3 (these steps do not prune neurons based on their L2 norm). And we definitely have to better explain the theory our algorithm is based on. The main idea of the algorithm is: The theory would tell us that if the training algorithm converged there should only be a small number of clusters of neurons (one could give some theoretical bounds for this small numbers of clusters, but our experiments show that this number is in practice many orders of magnitude smaller than the known theoretical bounds). If the training algorithm would converge perfectly each neuron would fall into one of the 3 categories:\\n\\n1. it is an outside neuron \\n2. all its weights are exactly zero\\n3. based on our clustering vector representation $(\\\\frac{b_k v_k}{||v_k||^2},\\\\frac{v_k}{||v_k||})$ it should exactly fall into one of the few clusters. So in this vector representation the clusters should not be normal point clouds but each cluster should consist of many neurons that have EXACTLY the same vector representation.\\n\\nThis statement is a quite direct corollary of eq. (1) to (4), which follow from our unpublished work. In the case of shallow neural networks these results directly follow from the published papers we cited. We should formulate and derive this corollary more explicitly. These 3 cases directly motivate the 3 steps of our algorithm:\\n1. For an outside neuron, we came up with a simple idea of how we can replace all the outside neurons together by introducing (or updating) the affine weights without influencing the training loss L.\\n2. In practice, we never run the gradient descent based training algorithm to full convergence and computers use only a finite precision for arithmetic operations. Therefore the neurons do not have exactly zero parameters, but we are very confident that we only \\u201chelp the training algorithm\\u201d by removing the weak neurons since the theory would tell us that they would probably converge to zero for infinite exact training. (also note that our weakness criterion is much more natural from a function space point of view than just summing over all the squared parameters of the neuron)\\n3. In practice, again due to numeric noise and finite training-time, the vector representation of the neurons within one cluster do not exactly agree with each other, but for example in Figure 7 you can see how well they cluster. If the vector representation of the neurons exactly agrees (as they should in theory), we would EXACTLY conserve the learned function by putting the neurons of one cluster into a single neuron (note that the parameters of the neurons can be arbitrarily far away from each other when their vector-representation is exactly the same). 
Again the extremely small changes of the clustering step in practice can be rather seen as reducing the numerical noise and pushing the behavior of the learned function into the behavior it would obtain in the infinite training limit.\\n\\nWe think that most pruning approaches in literature try to heuristically remove weights that do not damage the training loss L too much. We instead have a highly theory driven algorithm that would be able to remove many neurons without changing the training loss L in the case of a perfectly trained network. In the case of a practically trained network this still holds approximately and our algorithm typically brings the network a bit closer to the perfectly trained network that one would obtain after infinitely long gradient flow without numerical noise.\"}",
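Category 1 above (outside / always-active neurons) can likewise be illustrated with a toy snippet: a neuron whose pre-activation is positive on all training inputs acts affinely there, so its contribution can be moved into an affine (skip) map without changing the training loss. The data and parameters below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(1.0, 2.0, size=(50, 3))         # hypothetical training inputs
v, b = np.array([1.0, 0.5, 0.2]), 0.1
w = np.array([0.3, -0.7])                       # outgoing weights of the neuron

pre = X @ v + b
assert (pre > 0).all()                          # always active on the data

out_neuron = np.outer(np.maximum(pre, 0.0), w)  # the neuron's contribution
out_affine = np.outer(pre, w)                   # equivalent affine map w (v.x + b)
print(np.allclose(out_neuron, out_affine))      # True on the training data
```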
"{\"title\": \"Not a clear paper, nor contributons, lack of convincing experiments and positioning with respect to state of the art methods.\", \"review\": \"The paper focuses on defining a new architecture that allows being reduced without significantly affecting the performance.\\n\\nIn short, the paper is not properly written nor well organized; is hard to read with vague contributions and vague positioning with respect to the state of the art. Experiments are not convincing: Toy experiment and minimum experiments in MNIST without a clear comparison to existing neuron pruning algorithms.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Paper is not ready for publication.\", \"review\": [\"Summary:\", \"In this paper, the authors propose a novel algorithm for pruning fully-trained ReLU neural networks. To motivate the algorithm, the authors first introduce a new network architecture with an affine skip-connection at each layer. Then the authors connect it to the theory developed in an `unpublished work`. They show that for such neural networks, the number of units in the original layer can be greatly reduced. The main idea of the proposed algorithm is basically to prune those neurons whose removal will not change the function a lot. To quantify this, the authors turn to the L2 norm of each neuron. Experiments on simple toy data and MNIST are conducted.\", \"Overall, I believe this paper is not ready for publication. So, I vote for rejection.\", \"This paper massively refers to the unpublished work by the author, while the authors only provide only little details about the developed theory in the unpublished work. If this is a concurrent submission to ICLR, the author should still cite it anonymously.\", \"To me, the proposed deep stack network is very similar to the formulation of the highway network in Srivastava et al., (2015).\", \"The experiments are not convincing enough. The authors only conduct experiments on a simple toy dataset and MNIST. The numbers are fairly close to each other. To show the statistical significance, some measures, such as one standard error should be provided. I would encourage the authors to at least conduct some experiments on CIFAR datasets.\"], \"typos\": [\"Exactly speaking, the learned function will not be the same same as before ->Exactly speaking, the learned function will not be the same as before\", \"Therefore the the function optimizing eq. (2) can be represented by finite number -> Therefore the function optimizing eq. (2) can be represented by a finite number\", \"Srivastava, Rupesh Kumar, Klaus Greff, and J\\u00fcrgen Schmidhuber. \\\"Highway networks.\\\" arXiv preprint arXiv:1505.00387 (2015).\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This paper considers a functional regularization form of neural network training problems to prune networks. There are significant issues in the presentation and clarity.\", \"review\": \"The authors leverage a functional regularization reformulation of neural network training problems to prune networks via a reduction algorithm. They present limited experimental evidence showing that the reduction algorithm reduces the number of neurons without sacrificing too much accuracy. \\n\\n\\nMajor comments/questions\\n\\n1. Clarity and correctness\\nThere are significant issues in the presentation and clarity. The authors use footnotes to explain important concepts, but many definitions are missing. The material in the footnotes can be included in the main text with a more natural flow. The main observation in Section 4.1 is not presented as a rigorous result, which appears to be the most interesting result. Remark 4.1 can also be presented as a theorem.\\n\\n2. Insufficient comparisons with the baselines.\\n There is extensive literature in pruning and sparsification of network layers. In Table 1 and Table 2 there is no comparison with standard baselines in pruning. Does the proposed method perform better than standard pruning based on weight magnitude/gradient norm/Hessian based metrics? \\n\\n3. Figure 6 is not very informative. It would be better to zoom in the relevant portion of the plot. Focusing on a few examples instead of seven different examples would make a better display.\\n\\n4. It is not clear why the authors consider the bottleneck architecture in Figure 4 and 5. Is the bottleneck required for the theory behind pruning or reducing overfitting?\\n\\n5. Section 3 starts with 'the authors have shown in an unpublished paper\\\". Is this referring to Maennel et al, 2018? Similar results also appeared in other papers (e.g. Savarese et al. 2019. Please provide a reference or proof for the equivalence of (1) and (2). The proof is in fact straightforward, we only need n_j>=d+1 to hold for Caratheodory's theorem. The authors can be more precise for the required width n_j. This is a significant omission.\\n\\n6. On page 3, footnote 1, P(f) is not properly defined and it's not clear what P(f)=\\\\infty means. This can be clarified by providing necessary references noted above.\\nFurthermore in eq (1), NN_\\\\theta is not properly defined. Is this a standard relu network? \\n\\n7. In the introduction, the authors claim that the proposed method preserves the network output exactly as opposed to other pruning methods. However, in Section 4.2 and 4.3, the authors also resort to approximation methods involving magnitude based pruning and clustering, which are standard in the literature. The observation from Section 4.1 is also not exactly applied and yields an approximate neural network.\\n\\n8. What would be the computational complexity of looping through every neuron and the proposed approximation? Is there a way to justify this approximation?\\n\\nMinor comments\\n1. The manuscript needs a careful proofreading since it contains lots of typos and grammatical errors.\\npage 1. It's architecture -> Its architecture.\\npage 1. authros -> authors\\n\\n2. There some definitions which need further explanation\\npage 1. I believe what is meant by a 'large layer' is a wide layer.\\npage 1. Could you please clarify what sparsity refers to in \\\"reduce the number of neurons by 90% to 99% without introducing sparsity\\\"? \\npage 3. 
bottleneacks->bottlenecks\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Pruning technique for ReLU networks with insufficient validation and derivation\", \"review\": \"The paper suggests a pruning technique specific to ReLU networks by taking advantage of activation patterns and the separating hyperplanes. The technique consists of three steps: : (1) remove neurons that are never active and combine neurons that are always active, (2) remove neurons with little contribution to the output, and (3) use weighted k-means to combine other neurons. The method is evaluated on specific toy problems and the MNIST dataset.\\n\\nThe paper proposes a novel technique to reduce neurons in a ReLU network. The method to combine neurons takes the hyperplane arrangements (where activations of neurons change) into account and leads to much smaller networks with equal performance in the experiments.\\n\\nThe three major problems of the paper are that it lacks motivation of the proposed technique, it contains an insufficient experimental evaluation and important parts of the paper cannot be reviewed either due to a reference to unspecified, unpublished work or a lack of a derivation.\\n\\nSince the correctness of the paper cannot be evaluated and the technique is insufficiently validated, I recommend to reject.\", \"details_on_weaknesses\": \"- The correctness of the proposed technique cannot be reviewed. \\n(a) Section 3 cannot be confirmed as it refers to results from unspecified unpublished work, i.e., it is impossible to find and read through the unpublished work to estimate its validity. Moreover it is unclear why these results are important for the given paper and how the results are used. The paper states that one should only learn from this entire section that a finite(!) number of neurons is sufficient (which we always have in a practical setting so the conclusion is void?) In any way, either this section is not necessary for the rest of the paper and should be removed, or it is necessary in which case it cannot be verified.\\n(b) The proposed pruning technique consists of three steps (see above in the summary) two of which are trivial: (1)&(2). The third step (3) uses weighted k-means to combine other neurons. There is no explanation, motivation or derivation of the equations how the clusters are combined. (The workings of the method are also surprising, because a cluster containing a single neuron is reduced to a new single neuron in such a way that the function changes, which is counter-intuitive. It seems therefore likely that the equations contain typos, also since they introduce square roots of square roots and it is not defined what is meant by squaring a vector in the function g.)\\nTherefore, the validity cannot be confirmed and the reader must trust the experimental result section. This is unfortunate as there are opportunities to shorten less relevant parts in favor of a derivaiton of the equations.\\n\\n- The experiments are not sufficient. The experiments only consider specific toy problems and the MNIST dataset, which is too simple to showcase the pruning technique. The method is neither compared to any other pruning technique and it is fairly simple to prune networks on MNIST with a similar loss of accuracy. Finally, the method introduces two hyperparameters which are not tested. To validate the performance, both a more complex dataset and the comparison to other pruning techniques are necessary.\\n\\n- The presentation needs improvement. 
For example, there is a rather long explanation of a seemingly simple architecture and still details are left unclear (Is there a linear layer with weights that are trained, or is the linear skip connection only introduced when pruning the network?) It would be helpful to add an equation for the stack layer and reduce the explanations. The possibly redundant Section 3 could be removed. Instead, the proposed method could be derived and explained. The experiments could be explained in more detail (Why do the plots show the smoothened second derivative?)\", \"typos\": \"Line 4 Motivation \\u201eauthros\\u201c\\nPage 3, footnote, \\u201ebottleneacks has diomension\\u201c\", \"page_4_toward_the_end_of_section_3\": \"\\u201ethe the\\u201c\", \"page_5\": \"the first sentence is not a sentence\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Not novel, below average on most aspects\", \"review\": \"The paper describes a way to reduce the dimensionality of a deep ReLU network. In general, the paper is not well written and hard to follow. They keep referencing \\\"unpublished work of the authros\\\" although its use in practice is not very clear.\\n\\nPractically, they prune a deep network by (a) removing dead ReLU neurons; (b) combine neurons for which the ReLU always acts as the identity. None of these two ideas is particularly novel, especially considering the huge amount of literature to be found on network pruning. Experiments are only done on an artificial dataset and on MNIST.\\n\\nMany parts of the paper are poorly described. For example, their \\\"stack network\\\" is simply a residual network with affine projections on the residual link. A \\\"P-FUNCTIONAL\\\" is not defined. The link between Eq. (4) and their algorithm is not clear.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
VYfotZsQV5S | MISSO: Minimization by Incremental Stochastic Surrogate Optimization for Large Scale Nonconvex and Nonsmooth Problems | [
"Belhal Karimi",
"Hoi To Wai",
"Eric Moulines",
"Ping Li"
] | Many constrained, nonconvex and nonsmooth optimization problems can be tackled using the majorization-minimization (MM) method which alternates between constructing a surrogate function which upper bounds the objective function, and then minimizing this surrogate. For problems which minimize a finite sum of functions, a stochastic version of the MM method selects a batch of functions at random at each iteration and optimizes the accumulated surrogate.
However, in many cases of interest such as variational inference for latent variable models, the surrogate functions are expressed as an expectation. In this contribution, we propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates.
We establish asymptotic and non-asymptotic convergence of our scheme in a constrained, nonconvex, nonsmooth optimization setting. We apply our new framework for inference of a logistic regression model with missing data and for variational inference of Bayesian variants of LeNet-5 and Resnet-18 on the MNIST and CIFAR-10 datasets, respectively. | [
"nonconvex",
"optimization",
"stochastic",
"sampling",
"MCMC",
"majorization-minimization"
] | Reject | https://openreview.net/pdf?id=VYfotZsQV5S | https://openreview.net/forum?id=VYfotZsQV5S | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"5xE4U2JcMBy",
"6BQbendknnZ",
"jTo4SZ4frwi",
"cMLYy5a7HJ2",
"kTXNnCmMG89",
"5c8C8Wa-S-2",
"7WMCspilEAL",
"ikvdmh1Y77D",
"QCk_fNmpO92",
"w0bFEdBZ_D8",
"QmXEmpx0JW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040507199,
1605288051731,
1605288020500,
1605287989319,
1605287959334,
1605287909036,
1604889204885,
1604009339366,
1603904901213,
1603899887505,
1602890234350
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3698/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3698/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3698/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3698/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3698/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3698/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3698/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3698/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3698/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3698/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper extends the Majorization Minimization principle, particularly the MISO method, to problems where the surrogate of randomly selected batch functions are intractable, such as in formulations rising from Variational Inference. There is a large gap between the reviewers' evaluations even after the author rebuttal and discussions. While the strength of the proposed method looks to be its generality, the main criticism from the reviews are the limited applicability and less convincing arguments and empirical evidences against alternatives such as Monte Carlo versions of popular adaptive stochastic optimization methods. Weighing these considerations and considering strengths of other submissions on similar topics, I have to recommend rejection of the paper at the current form.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for his/her critical, yet very helpful comments. Below, we try our best to address your concerns on the significance of the results of our paper.\\n\\nFirstly, we agree with the reviewer that in the \\\\textbf{special case of quadratic surrogates}, the MISSO update yields to a gradient update very similar to the SAG [Le Roux, Schmidt, Bach, 2012]. Yet, the authors would like to draw attention on a subtle difference here. In the MISSO method, since the minimization occurs on the aggregate sum of the stochastic surrogates, a simple derivation of the minimization of quadratic functions gives this term as being equal to the mean of the past $n$ iterates (i.e. $1/n \\\\sum_{i=1}^n \\\\theta^{\\\\tau_i^k}$). Whereas in SAG, as any variance reduction technique, the contribution is in the drift term (constructed through incremental update) leaving the first term unchanged vis-a-vis SGD as equal to the last iterate (i.e., $\\\\theta^{k-1}$).\\nOf course when the user designed stochastic functions are no longer quadratic, the parallel with any stochastic gradient methods is no longer available, thereby making our framework more general.\\n\\nSecondly, we are glad that the reviewer has brought up the issue on sample complexity, which is an important metric that we have missed in the first version. We give below a clarification on the said issue. As the reviewer suggested, we limit ourselves to smooth optimization and study the number of samples needed to attain a stationary solution with $|| \\\\nabla L( \\\\theta) ||^2 \\\\leq \\\\epsilon$. In this setting, the complexity for the naive SGD method described by the reviewer has a complexity of $O( nL / \\\\epsilon^4)$. \\n\\nOn the other hand, for MISSO, we note that the stationarity metric in (16) satisfies $|g_{-}(\\\\theta)| = || \\\\nabla L(\\\\theta) ||$. As such, to make a fair comparison with the above, it is important to consider $|g_{-}(\\\\theta)|^2$ and compare the number of iterations needed to attain a $\\\\epsilon$ stationary solution should be $O( \\\\Delta / \\\\epsilon)$, where $\\\\Delta$ is defined in Theorem 1. \\n\\nNow, in order to keep $\\\\Delta \\\\asymp nL$, we need to set $M_k = k^2 / n^2$, therefore this yields the sample complexity of $\\\\sum_{k=1}^{K_{max}} M_k = (1/n^2) \\\\sum_{k=1}^{ O(nL/\\\\epsilon) } k^2 = O( n L^3 / \\\\epsilon^3 )$. \\nIn conclusion, we found that the *sample complexity* for MISSO is also lower in terms of the dependence on $\\\\epsilon$, though it comes at a price of an additional $L^2$ term. \\n\\nIn summary, we have:\\n\\n*Iteration Complexity:* According to Theorem 1, MISSO requires $K = {\\\\cal O} (n L/\\\\epsilon)$ iterations to ensure $|g_-( \\\\theta^{K} )|^2 < \\\\epsilon$ ($\\\\epsilon$-stationarity).\\nWhereas, for the naive algorithm proposed by the reviewer, with batch setting it requires $K= L/\\\\epsilon^{2}$ iterations to get $\\\\epsilon$-stationarity.\\n\\n*Sample Complexity:* For the naive method, the sample complexity of $O( nL/\\\\epsilon^4 )$ holds.\\nYet, for MISSO, if we set $M_k = k^2/n^2$ such that $\\\\Delta$ is of order $\\\\mathcal{O}(nL)$, the sample complexity becomes $\\\\sum_{k=0}^{nL/\\\\epsilon} k^2/n^2 = (1/n^2)*(nL/\\\\epsilon)^3 = nL^3 / \\\\epsilon^3$. In comparison with the proposed method ($nL/\\\\epsilon^4$), we sacrifice $L^2$ to win an order of $\\\\epsilon$.\\nWe will include the above calculations in the revised paper. 
Nevertheless, the authors are happy that the reviewer has raised the above concerns, which led us to further examine the benefits of the MISSO method.\"}",
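For readers checking the arithmetic in the rebuttal above, the sample-complexity claim can be written out explicitly. The bound below is a sketch under the schedule stated in the rebuttal, $M_k = k^2/n^2$ and $K_{\max} = \mathcal{O}(nL/\epsilon)$, using only the elementary bound $\sum_{k \le K} k^2 \le K^3$:

```latex
\sum_{k=1}^{K_{\max}} M_k
  \;=\; \frac{1}{n^{2}}\sum_{k=1}^{K_{\max}} k^{2}
  \;\le\; \frac{K_{\max}^{3}}{n^{2}}
  \;=\; \mathcal{O}\!\left(\frac{1}{n^{2}}\Big(\frac{nL}{\epsilon}\Big)^{3}\right)
  \;=\; \mathcal{O}\!\left(\frac{nL^{3}}{\epsilon^{3}}\right),
\qquad \text{vs.}\qquad
  \mathcal{O}\!\left(\frac{nL}{\epsilon^{4}}\right)\ \text{for the naive SGD scheme.}
```

So, as the rebuttal states, one power of $\epsilon$ is gained at the price of an extra $L^2$ factor.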
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for his/her comments. Below, we try our best to address your concerns on the originality of our paper.\\nWe want to stress that the paper's significance lies on the generality of our incremental optimization framework, which tackles a constrained, non-convex and non-smooth optimization problem. \\nThe main contribution of this paper is to propose and analyze a unifying framework for a large class of optimization algorithms which includes many well-known but not so well-studied algorithms.\\n\\nThe major idea here is to relax the class of surrogate functions used in MISO [Mairal, 2015] and to allow for intractable surrogate that can only be evaluated by Monte-Carlo approximations.\\nWe provide a general algorithm and global convergence rate analysis under mild assumptions on the model and show that two examples, MLE for latent data models and Variational Inference, are its special instances. Importantly, our convergence analysis applies to *both* applications for which analysis are lacking in the current literature. \\nThe major proof idea here is to relax the class of surrogate functions used in MISO [Mairal, 2015] and to allow for intractable surrogate that can only be evaluated by Monte-Carlo approximations. Working at the crossroads of Optimization and Sampling constitutes what we believe to be the novelty and the technicality of our results.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for his/her interest in our paper. Below we address your concerns about our weaknesses.\\n\\nFirstly, we believe that the research direction on second order surrogates is an interesting one. Particularly, papers such as K-FAC (\\\"Optimizing Neural Networks with Kronecker-factored Approximate Curvature\\\" by Martens and Grosse) would be an interesting comparison, as it involves using an approximation of the Hessian matrix. We believe that the MISSO method can include ideas similar to K-FAC as a special case. In particular, it can be done by considering a *scaled* quadratic surrogate function, e.g., for the VI example in (11), we may replace the last term by $|| \\\\bar{\\\\theta} - \\\\theta ||_{H}^2$, where $H$ is an approximate Hessian. In addition, we also refer to the recent work on \\\"IQN: An incremental quasi-Newton method with local superlinear convergence rate.\\\" by Mokhtari, Eisen, and Ribeiro, in SIOPT, 2018, where a BFGS like method using memorized quantities to reduce the variance of stochastic approximations is applied to the problem of stochastic optimization leveraging quasi-Newton functions. Their work is on (a) convex and strongly convex functions and (b) deterministic surrogates. \\n\\nSecondly, we notice that the MM algorithm is in fact very general where the only requirement on the surrogate function is that it satisfies H1,H2 (on page 2), and all our analysis afterwards will follow. As discussed in (Mairal, 2015), we believe that the MM is sufficiently general to be considered relevant to the ML community. Due to the limitation of space (in the main paper), we have only explicitly stated the quadratic surrogate function used for the VI example. However, in Example 1: MLE example via stochastic EM, a different surrogate function is actually used under the hood, and the reviewer is referred to section B.2 for a detailed discussion. Finally, we notice that convergence in expectation is commonly established in the optimization literature and our notion of convergence is standard. Furthermore, we notice that the result can be easily converted to that of high probability convergence using the classical Markov inequality.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We are grateful for the reviewer who has found the studied subject on MM methods interesting. We too believe that the MM method is a versatile framework for tackling modern ML problems and deserve more attention in this community.\\n\\nIn our experiments, we found that the tested methods involve a similar number of gradient computations per iteration (since reported every epoch), as such the wall clock time per iteration are comparable. To further support this comment, in the revised version, we provide in the appendix additional comparison of the convergence against the running time. \\nAlso, MC-ADAM seems to outperform MISSO for variational inference\\nWe agree with the reviewer that MC-ADAM seems to outperform MISSO on the variational inference example. Indeed, we must acknowledge that while our MISSO scheme does not beat the SOTA (such as MC-ADAM) in every example. However, we should emphasize that the goal of this paper is to propose a simple yet general incremental optimization framework which encompasses several existing algorithms for large-scale data. In particular, the same framework also applies to other learning problems such as MLE with stochastic EM (Example 1). \\n\\n-- \\\" link of the contributed method to the follow up work by Mairal and colleagues, Stochastic Approximate MM (Mensch et al 2017)\\\"\\n\\nWe believe the reviewer makes a reference to \\\"Stochastic Subsampling for Factorizing Huge Matrices\\\" by Mensch et. al. (https://arxiv.org/pdf/1701.05363.pdf). In this paper, the authors focus on the problem of matrix factorization for the purpose of dictionary learning. In the particular and challenging case of sparsity and high dimensional matrices (typical in fMRI data), the authors propose a stochastic MM scheme. The level of stochasticity occurs in the index sampling step (sampling subset of dimensions), see first step in Algorithm 3 in their paper, then compute the parameters of the surrogate function leveraging a deterministic (no added stochasticity in this step) Robbins-Monro type of update.\\nRather, in our work, two levels of stochasticity are at stake. The first one is similar as theirs, i.e. the sampling of individual indices, and the second one deals with the Monte Carlo approximation of the intractable surrogate functions (written in our illustrative examples as expectations). The theoretical and practical study of a doubly stochastic as MISSO constitute the main contributions of our paper compared to the mentioned reference, while sharing similar assumptions on the model such as smoothness and existence of directional derivative (see their assumptions (D) and (E)).\\nIn the revised version, we will include a discussion on the above mentioned works.\"}",
"{\"title\": \"Response to AnonReviewer5\", \"comment\": \"We thank the reviewer for his/her critical comments. However, we believe that our paper does contain a number of important contributions, as we try to explain below\\n\\nAs rightfully mentioned, the M-step (line 8 of Alg. 2) can be as costly as the batch MM method *in the worst case*. \\nHowever, in many cases, the MISSO algorithm can take advantage of the (stochastic) surrogate function's structure such that the M-step can be performed in low complexity. Notice that having an easy-to-optimize surrogate function is often a major advantage with MM methods.\", \"let_us_first_consider_example_2\": \"the VI example (in p.4), the surrogate functions are quadratic approximations of the likelihood functions. Here, the M-step updates can be derived in closed form involving a running average of the previously drawn parameters and samples (e.g., it has a similar form as popular methods such as SAG, SAGA). Thus, the complexity of line 8 is similar to any stochastic method, i.e., independent of $n$\", \"likewise_for_example_1\": \"MLE with stochastic EM (in p.3), where, since the complete log likelihood belongs to the curved exponential family, the opaque M-step (line 8) is expressed w.r.t the sufficient statistics, see Section B.3 of the supplementary material. Hence, the M-step actually leverages the incremental characteristic of MISSO since the stochastic sufficient statistics used in the M-step are also incrementally updated.\\nThe advantage of our incremental method, MISSO, lies in the use of line 7 where only a mini batch of stochastic surrogates is updated, with the majority of surrogates being unchanged. \\nFor your suggestion of having updates that \\\"only find the minimizer of stochastically picked individual surrogate function\\\", we believe that this will lead to an algorithm similar to plain SGD instead of the incremental stochastic methods like MISSO. To see this, we can consider employing a stochastically picked (say the $i_k$th function) quadratic, deterministic surrogate expanded around $\\\\theta^k$, then we update $\\\\theta^{k+1}$ as a convex combination of $\\\\theta^k$ and the minimizer to the surrogate. It can be shown that\\n$\\\\theta^{k+1} = \\\\theta^k - \\\\gamma_k (1/L) \\\\nabla L_{i_k}( \\\\theta^k )$\\nwhere $\\\\gamma_k \\\\in [0,1]$ is the convex combination weight and we have used that the surrogate's minimizer is $\\\\theta^k - (1/L) \\\\nabla L_{i_k}( \\\\theta^k )$. \\nIn this way, the algorithm cannot take advantage of the finite sum structure of the optimization problem and the convergence rate can be worse \\n\\nWe emphasize that there are two sources of stochasticity in our MISSO framework (and settings in general). Not only is the individual surrogate function stochastically picked (as stated by the reviewer), but the latter is also approximated by Monte Carlo sampling. This double level of stochasticity makes most of the existing theoretical convergence proofs inapplicable for our work. \\nWe notice that in addition to Th.2 which establishes the asymptotic convergence of MISSO, our Th.1 also provides the non-asymptotic convergence rate for MISSO. We must stress that compared to Th.1, the existing results on stochastic proximal gradient algorithms are not sufficient to provide such almost sure convergence guarantee. 
Indeed, we shall recall to the reviewers that our MISSO framework is strictly more general than stochastic gradient method as it also includes, among others, EM-like algorithms that can not always be casted as gradient methods.\\nAs such, we respectfully disagree with the reviewer that our \\\"convergence follows by existing literature on stochastic proximal gradient\\\" due to the vast difference between MM methods and standard gradient-based method. In particular, although stochastic proximal gradient methods can be regarded as a special case of MISSO, the latter method is very different and requires different analysis tools. As a comparison, in the recent work on \\\"Proximal-proximal-gradient method\\\" by Ryu and Yin, Journal of Computational Mathematics, 2019, the authors proposed a general framework to unify proximal gradients algorithms and cast them as MM methods (for deterministic and convex objectives), yet the method uses stepsizes of order 1/L rather than n/L. \\n\\nWe have indeed derived the non-asymptotic convergence rate for our MISSO method in Th.1. \\nIt gives a global rate on (1) the gradient of the gap between the surrogate and the objective function and (2) the stationary condition (eq. (14))\\n\\nTh.1 explicitly shows the convergence of MISSO \\\"at a sublinear rate $\\\\mathbb{E}[ g_-^{(K)} ] \\\\leq {\\\\cal O}( \\\\sqrt{1 / K_{\\\\sf max}} )$\\\". Hence, MISSO requires ${\\\\cal O} (n L/\\\\epsilon)$ iterations to ensure $||g_-( \\\\theta^{K} )|| < \\\\epsilon$ (as a definition of $\\\\epsilon$-stationarity for *constrained* optimization). Notice that the obtained rate of convergence is comparable to existing algorithms on non-smooth optimization that rely on processing a full batch of data at each iteration. In the revision, we also discuss the sampling complexity of MISSO\"}",
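To make the contrast drawn in the two rebuttals above concrete, here is a minimal, self-contained sketch of a MISSO-type update in the special case of quadratic surrogates on a toy least-squares finite sum. All names and the problem instance are illustrative, not taken from the paper; the point is only that minimizing the aggregated quadratic surrogates yields an anchor term equal to the mean of the past anchors, unlike SAG/SGD where it is the last iterate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 5
A = rng.normal(size=(n, d))
y = rng.normal(size=n)
Lip = (A ** 2).sum(axis=1).max()   # majorizes each component's curvature ||a_i||^2

def grad_i(theta, i):
    # Gradient of the i-th component L_i(theta) = 0.5 * (a_i . theta - y_i)^2
    return (A[i] @ theta - y[i]) * A[i]

theta = np.zeros(d)
anchors = np.tile(theta, (n, 1))   # one anchor theta^{tau_i} per component
grads = np.stack([grad_i(theta, i) for i in range(n)])

for _ in range(500):
    i = rng.integers(n)            # refresh one stochastic surrogate (cf. line 7)
    anchors[i], grads[i] = theta, grad_i(theta, i)
    # Minimizer of the aggregated quadratic surrogates (cf. line 8): the
    # anchor term is the mean of past anchors, not the last iterate.
    theta = anchors.mean(axis=0) - grads.mean(axis=0) / Lip

print(np.linalg.norm(A.T @ (A @ theta - y)) / n)  # full gradient norm, near 0
```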
"{\"title\": \"Reviewer 5's Report\", \"review\": \"This paper develops a stochastic MM-type algorithm to minimize a finite sum. Essentially, the stochastic method draws one sample at each iteration, and find a majorization surrogate for the corresponding loss, and find the minimizer for the updated total loss.\\n\\nOverall, I don't find the paper well-developed and doesn't meet the bar of a top conference like ICLR for the following major concerns:\\n\\n1. The major flaw is that in each iteration, the algorithm requires us to find the minimizer of the updated total loss (Step 8 of algorithm 2). This step is computationally as expensive as the update step in a batched MM algorithm. For a stochastic-type algorithm, I would expect the update only finds the minimizer of the stochastically picked individual surrogate function.\\n\\n2. By minimizing a stochastically picked individual surrogate function, the convergence follows by existing literature on stochastic proximal gradient method, there Theorem 2 follows without much difficulty.\\n\\n3. The convergence rate of the proposed method is not derived, which shouldn't be too difficult to derive.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting theory, though the practical relevance is unclear\", \"review\": \"This manuscript contributes a stochastic optimization method for finite sums where the loss function is itself an intractable expectation. It builds upon stochastic majorization-minimizations methods, in particular MISO, that it extends to use Monte-Carlo approximation of the loss.\\n\\nI am happy to see some attention put to the majorization-minimizations methods, which have many interesting benefits. The paper contributes nice theoretical results, in particular non-asymptotic results. However, I believe that these theoretical results are not enough to situate the contribution with regards to the wider landscape of optimization methods for machine learning.\\n\\nIn this respect, the empirical study is crucial, however it is not completely convincing. Expressing figures 1 and 2 as a function of the number of epoch, rather than as an estimate of runtime is not meaningful: it discards the cost of running the inner loop, which varies from one approach to another. It would leed to believe that MISSO50 is the best option, which is probably not the case.\\n\\nAlso, MC-ADAM seems to outperform MISSO for variational inference\\n\\nWith regards to the broader contribution, it is very appreciable to have a wider theory of stochastic optimization with MM methods. It would have been good, however, to have a discussion of the link of the contributed method to the follow up work by Mairal and colleagues, Stochastic Approximate MM (Mensch et al 2017).\\n\\n\\n**Additional comments after the discussion**\\n\\nThe authors have thoroughly replied to all the comments from the various reviewers.\\n\\nAfter reading all the discussions (other reviews as well as replies from the authors), it appears to me that the practical relevance of this contribution is not completely clear. The computational cost of each iteration is large. The benchmarks do not show clear improvements in computational.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A good paper\", \"review\": \"This paper propose a doubly stochastic MM method based on Monte Carlo approximation of these stochastic surrogates for solving nonconvex and nonsmooth optimization problems. The proposed method iteratively selects a batch of functions\\nat random at each iteration and minimize the accumulated surrogate functions (which are expressed as an expectation). They establish asymptotic and non-asymptotic convergence of the proposed algorithm. They apply their method for inference of logistic regression model and for variational inference of Bayesian CNN on the real-word data sets.\\n\\nWeak Points.\\nW1. The authors do not discuss the connections with state-of-the-art second-order optimization algorithms such as K-FAC.\\nW2. The proposed algorithm still falls into the framework of MM algorithm and a simple convex quadratic surrogate function is considered. The convergence rate of the algorithm is expected.\\n\\nStrong Points.\\nS1. The proposed method can be viewed as a combination of MM and stochastic gradient method with variance reduction, which explains its good performance. \\nS2. The paper contains sufficient details of the choice of the surrogate function and all the compared methods in the experiments.\\nS3. The authors establish asymptotic and non-asymptotic convergence of the proposed algorithm. I found the technical quality is very high.\\nS4. Extensive experiments on binary logistic regression with missing values and Bayesian CNN have been conducted.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A stochastic optimization method with surrogate functions from Monte Carlo samples is developed\", \"review\": \"This paper proposed MISSO, which is an extension of MISO to handle surrogate functions that are expressed as an expectation. MISSO just used the Monte Carlo samples from the distribution to construct objectives to minimize.\\n\\nIt seems to me that MISSO is just a straigforward extension of MISO, also the empirical results seems to suggest the proposed MISSO has no advantage over Monte Carlo variants of other optimizers, such as MC-SAG, MC-ADAM, thus it is not clear to me what is the significant aspect of this work.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Marginally above the acceptance threshold\", \"review\": \"1. Summarize what the paper claims to do/contribute. Be positive and generous.\\n\\nIn this paper, the authors consider solving the optimization of the summation of a finite number of component functions. The proposed algorithm is based on a previous work called Minimization by Incremental Surrogate Optimization (MISO). The MISO is a majorization minimization algorithm, which shares a similar update style of the SAG method. However, different from SAG, whose convergence is not available for nonconvex optimization, and is even very tricky in convex case, MISO enjoys a global convergence guarantee due to its majorization property. Based on this existing method, for the problems whose majorization surrogate is very hard to construct, e.g. variational inference of latent variable models, the authors of this paper propose a sample average approximation of the exact majorization surrogate function. The convergence of the proposed algorithm is also provided in this paper. \\n\\n2. Clearly state your decision (accept or reject) with one or two key reasons for this choice.\\nThis paper is marginally below the acceptance threshold. \\n\\n3. Provide supporting arguments for the reasons for the decision. \\n\\n(i). (Weakness) For the hard cases where each component is an expectation itself, the strategy applied here is to do a simple sample average approximation. This requires the sample size of in each iteration (M_k) to satisfy the condition that \\\\sum_k M_k^{-1/2}<\\\\infty. That is, in the $k$-th iteration, the sample size will be at least k^2. According to Theorem 1, the number of iteration should be K\\\\geq nL/\\\\epsilon^2. Consequently, the total sample complexity of this method seems to be \\\\sum_{i=1}^{K} k^2 ~ n^3L^3\\\\epsilon^{-6}. The $n^3L^3$ dependence seems very bad. However, let us do a simple estimation of a naive method: 1. In each step compute the \\\\epsilon-accurate estimation of the gradient for each component, this needs O(n \\\\epsilon^{-2}) samples per iteration. Then if the function is L-smooth (this paper can handle nonsmooth cases) then the total iterations will be O(L\\\\epsilon^{-2}). Then the total sample complexity seems only O(nL\\\\epsilon^{-4}). This might need some clarification. \\n\\n(ii). (Strength) This paper provides a non-asymptotic rate of convergence for the MISSO algorithm, which implies a non-asymptotic rate for the MISO method, whose non-asymptotic rate is not known before, which should be appreciated. Moreover, the numerical experiment in this paper is well presented. \\n\\n4. Provide additional feedback with the aim to improve the paper. Make it clear that these points are here to help, and not necessarily part of your decision assessment.\\n\\n(i). The MISSO (and MISO) share a similar updating style with SAG, it will be better if the authors could add some discussion on their relation and difference. Or, if such discussion exists in other literature, add a reference to that. \\n\\n(ii). After the Theorem 2. It may make sense to give the sample complexity of the result. Namely, to get the optimality measure \\\\leq \\\\epsilon, how many sampled are needed. Specifically, by the reviewer\\u2019s rough estimation, the dependence on n and L is O(n^3L^3), see my argument before, this dependence is not reasonable. My question is that can the authors carefully balance the parameters and derive a more reasonable sample complexity? 
If the O(n) and O(L) dependence can be achieved, the reviewer is willing to change to a higher score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
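Spelling out the estimate in point 3(i): with per-iteration sample size $M_k \approx k^2$ (the borderline choice for $\sum_k M_k^{-1/2} < \infty$; strictly, $M_k = k^{2+\delta}$ for some $\delta > 0$ is needed, which does not affect the polynomial rate) and $K$ iterations, the total number of samples is

$$\sum_{k=1}^{K} M_k \;\approx\; \sum_{k=1}^{K} k^2 \;=\; \frac{K(K+1)(2K+1)}{6} \;\sim\; \frac{K^3}{3},$$

so substituting $K \approx nL/\epsilon^2$ yields the quoted total sample complexity of order $(nL/\epsilon^2)^3 = n^3 L^3 \epsilon^{-6}$.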
]
} |
ZDnzZrTqU9N | Modeling the Second Player in Distributionally Robust Optimization | [
"Paul Michel",
"Tatsunori Hashimoto",
"Graham Neubig"
] | Distributionally robust optimization (DRO) provides a framework for training machine learning models that are able to perform well on a collection of related data distributions (the "uncertainty set"). This is done by solving a min-max game: the model is trained to minimize its maximum expected loss among all distributions in the uncertainty set. While careful design of the uncertainty set is critical to the success of the DRO procedure, previous work has been limited to relatively simple alternatives that keep the min-max optimization problem exactly tractable, such as $f$-divergence balls. In this paper, we argue instead for the use of neural generative models to characterize the worst-case distribution, allowing for more flexible and problem-specific selection of the uncertainty set. However, while simple conceptually, this approach poses a number of implementation and optimization challenges. To circumvent these issues, we propose a relaxation of the KL-constrained inner maximization objective that makes the DRO problem more amenable to gradient-based optimization of large scale generative models, and develop model selection heuristics to guide hyper-parameter search. On both toy settings and realistic NLP tasks, we find that the proposed approach yields models that are more robust than comparable baselines. | [
"distributionally robust optimization",
"deep learning",
"robustness",
"adversarial learning"
] | Accept (Poster) | https://openreview.net/pdf?id=ZDnzZrTqU9N | https://openreview.net/forum?id=ZDnzZrTqU9N | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"JxUwMrlaF_",
"El97FtNW5px",
"HmXGoOD5UBu",
"RIys1HD6GvM",
"F2YE04--bwI",
"2z4fe6BU6k1",
"qwk-PlR-HbT",
"5djV-tPZBwm",
"JiQpK5au4HY",
"oNpu4d_vs_D",
"6VAPqt0gfSC",
"4w0INndPHbV",
"J09zN4mXuM1",
"61muPjX9ACD",
"r0JSBbD5a6P",
"KRBnn7t612t",
"-6XcaSqdJt5",
"HOSukVouhlc"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040393109,
1606268057755,
1606248698852,
1606142629038,
1606140213331,
1605916528130,
1605870666449,
1605720210274,
1605632366337,
1605285311208,
1605283252908,
1605283165171,
1605282897112,
1605282623494,
1604279416517,
1603899957034,
1603559357536,
1603481338638
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3696/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"# Paper Summary\\n\\nThis paper considers the problem of distributionally robust optimization (DRO), in which one is attempting to minimize a loss on the worst of all distributions that are some distance (here, measured in terms of KL divergence) from the training set. The main novelty here is that this adversarial distribution is represented as a model, with parameters that are learned jointly with the primary model.\\n\\nThis is an intuitive idea, but as the authors explain, attempting to implement it leads to a number of complications. One of these is that it is challenging to constrain the adversarial distribution model to be a certain KL divergence away from the training set. To address this, they write down the Lagrangian, but do not actually optimize over the Lagrange multiplier resulting from this constraint: instead, they keep it at a fixed constant value (a hyperparameter). A second, and potentially more worrisome, issue is that it is difficult to optimize the KL divergence as written--instead, they swap the two parameters, which is of course incorrect but they claim leads to much nicer convergence behavior.\\n\\nThey also propose a stopping condition, which terminates optimization once the robust validation loss (i.e. the validation loss w.r.t. the worst permissible distribution) stops decreasing. Normally, this would require a search for the worst such distribution at every iteration, which would be prohibitively expensive, so they propose instead only checking the distributions that have been found by the adversary during the course of optimization.\\n\\nThey close with a set of experiments that is nicely designed to narrow in on and explore particular details of their approach (e.g. they have an experiment that validates their stopping criterion), and have a realistic experiment on two NLP datasets.\\n\\n# Pros\\n\\n1. Reviewers agreed that it was very well-written, well-organized, and comprehensive\\n1. Good discussion of background material. The paper is very accessible\\n1. Intuitive idea, although the details of the approach become somewhat complex\\n1. Aside from the \\\"realistic\\\" experiment, each is designed to explore a particular facet of their approach\\n\\n# Cons\\n\\n1. Some reviewers were concerned that the baselines were insufficient. In response the authors added the new Hu et al. baseline (NonParam), which seemed to be satisfactory\\n1. While the approach is more general, one reviewer noted that the experiments only consider NLP problems. This is a minor negative point, in my view\\n1. One reviewer was concerned that the results were \\\"too good\\\", and encouraged the authors to double-check their results. My belief is that, at least on the non-\\\"realistic\\\" experiments (which were mostly intended to drill down into specific attributes of their approach, rather than demonstrate its overall performance), this is because the problem was constructed to perform especially poorly with a non-DRO approach\\n1. One reviewer was unsatisfied with the idea of swapping the parameters to the KL divergence (I share this concern). The authors clarified, both in the response and in the paper, that swapping the parameters is indeed incorrect, and may in fact be a very bad approximation to the true quantity of interest, but that the performance difference was so dramatic that it couldn't be undone. 
This seemed to partially satisfy the reviewer\\n\\n# Conclusion\\n\\nAll four reviewers ultimately recommended acceptance. The major concerns were (i) that the baselines weren't good enough (which the authors addressed by adding a new baseline), and (ii) that swapping the parameters to the KL divergence results in a very poor approximation to the original KL divergence (which the authors now explicitly acknowledge in the paper, with an explanation for why they feel it is necessary). Overall, this is a nice idea, and while bringing it into practice may require more hand-waving than would be ideal (which is the main reason I suggested a poster acceptance instead of a spotlight or oral), it seems to work well experimentally, and the experiments are overall very careful and well thought-out. Additionally, the writing quality is excellent, as is the organization and presentation of background material.\"}",
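The stopping condition described in the decision lends itself to a simple sketch. The following is a hypothetical illustration, not the authors' actual implementation: the callables `train_step`, `snapshot`, and `val_loss_under` are placeholders. Instead of searching for the worst permissible distribution at each step, keep the adversaries encountered during training and early-stop on the worst validation loss over that growing set.

```python
def robust_val_loss(model, adversaries, val_loss_under):
    """Validation loss under the worst adversary seen so far."""
    return max(val_loss_under(model, adv) for adv in adversaries)

def train_with_greedy_minmax_stopping(model, adversary, train_step, snapshot,
                                      val_loss_under, patience=5, max_steps=10000):
    seen = []                               # adversaries encountered so far
    best_loss, best_model, stale = float("inf"), None, 0
    for _ in range(max_steps):
        train_step(model, adversary)        # one joint min-max update
        seen.append(snapshot(adversary))    # remember this adversary's parameters
        loss = robust_val_loss(model, seen, val_loss_under)
        if loss < best_loss:                # robust validation loss improved
            best_loss, best_model, stale = loss, snapshot(model), 0
        else:
            stale += 1
            if stale >= patience:           # it stopped decreasing: terminate
                break
    return best_model
```

In practice one would likely snapshot the adversary only periodically to bound memory, but the logic is the same.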
"{\"title\": \"Final comment\", \"comment\": \"Thanks to the reviewers making the effort of responding to the reviewers' concerns in-depth. The additional experiments were also a good idea. One reservation though: the gains of the proposed methods compared to the baselines (see response to all reviewers) seems too high. This kind of disproportionate performance is usually due to one of two things: (1) An issue a specific regime where the proposed methods are particularly better than the rest, and / or (2) A bug in the evaluation pipeline / protocol. I'd advise the authors to double-check. In the benefit of doubt, I'm increasing my score to 7.\"}",
"{\"title\": \"Final revision\", \"comment\": [\"We again thank all four reviewers for their feedback, and in particular R1 for the insightful discussion.\", \"We updated the paper to incorporate the reviewers\\u2019 comments. Here is a summary of the changes:\", \"Added another baseline: non-parametric KL-constrained DRO. This is in response to R3\\u2019s comments on the lack of baselines, and also intended to clear up confusions between parametric and non-parametric approaches to DRO brought up by R2\", \"Clarified the presentation to address R1\\u2019s concerns, specifically by:\", \"Making the distinction between empirical and theoretical distribution more explicit where necessary, in particular in how it relates to estimating the KL divergence\", \"Including a more nuanced discussion of the KL reversal\", \"Included additional experiments in the appendix to visualize the effect of various hyper-parameters, as suggested by R3.\", \"Highlighted appendix C.2 (experiments with a smaller adversary) better in the main text (in response to R3\\u2019s comment)\", \"Included additional references suggested by reviewers: Faury et al. (AAAI 2020; R2), Husain, 2020 and Nguyen et al. 2020 (R1)\", \"Fixed minor typos/presentation issues brought up by R2 and R4\", \"Adjusted a number (Greedy-minmax 30.43 -> 32.17) in Table 2a after a minor bug was fixed in our analysis code.\"]}",
"{\"title\": \"response\", \"comment\": \"Thank you for the extensive response --- As stated in the original review, I agree that the general idea still makes sense and that the paper demonstrates its usefulness.\\n\\nI am delighted that the authors aim to include the explicit distinction between empirical and theoretical KL-uncertainty set because it is of both theoretical and practical relevance. \\n\\nFinally, with this I will also note that the authors have addressed the two concerns I had. Including a more critical discussion of the KL-reversal together with a more explicit notational distinction between idealized and empirical data distributions will fix my main issues with the paper.\"}",
"{\"title\": \"Clarification\", \"comment\": \"Again, we thank the reviewer for taking the time to clarify their concern.\\n\\nWe will attempt to rephrase the order of operations in Section 3.2, and what we believe to be the reviewer\\u2019s core issue. First, for clarity\\u2019s sake, we define the following notation\\n\\n- $p$: the true data distribution and $m$ the empirical distribution\\n- $q^*=q_{\\\\tau,\\\\theta}^*=\\\\frac 1 Z p e^{\\\\ell/\\\\tau}$ and $m^*$ its restriction to the empirical distribution\\n- $q_\\\\psi$: the adversary and $m_\\\\psi$ its empirical counterpart (as defined by the reviewer)\\n\\nIn our intended presentation, Section 3.2 proceeds as follows:\\n\\n1. Write the lagrangian relaxation $-E_{q_\\\\psi}\\\\ell + \\\\tau KL(q_\\\\psi || p) + Constant = KL(q_\\\\psi || q^*) + Constant$\\n2. Reverse the order of the KL: $KL(q^* || q_\\\\psi)$\\n3. Plug-in the empirical distribution: $KL(m^* || m_\\\\psi)$\\n\\nIn particular in the last step, the plug-in of the empirical distribution on the left argument of the KL is standard practice and does not raise issues of absolute continuity. Insofar as one accepts the KL reversal, we believe that the use of the empirical distribution should be acceptable in this formulation.\\n\\nOn the other hand, if we understand correctly, the reviewer reads our order of operation as:\\n\\n1. Write the lagrangian relaxation $-E_{q_\\\\psi}\\\\ell + \\\\tau KL(q_\\\\psi || p) + Constant = KL(q_\\\\psi || q^*) + Constant$\\n2. Plug-in the empirical distribution $KL(m_\\\\psi || m^*)$\\n3. Reverse the order of the KL: $KL(m^* || m_\\\\psi)$\\n\\nIn this case we agree with the reviewer\\u2019s assessment that the transition from 1 to 2 is problematic because of absolute continuity issues. In fact, as far as we can see, there is no easy way to estimate the \\u201ccorrect\\u201d KL $KL(q_\\\\psi || q^*)$ without running into the aforementioned issues with the empirical distribution.\\n\\nEverything considered, the two derivations yield the same final objective (step 3., equation 8 in the paper), so the discussion ultimately comes back to the KL reversal. As we have argued above (and shown empirically in the paper), this approximation, while unsatisfactory, still allows us to train robust models in a tractable fashion.\\n\\nFinally, we agree that this discussion is important, and we will strive to make it clear in the paper, but we don\\u2019t think that it discounts the general idea of the paper, nor the experimental results.\"}",
"{\"title\": \"Importance of adversary's accuracy on performance: results and discussion\", \"comment\": \"We would like to add additional comments with regards to the reviewer's enquiry as to the importance of the adversary's accuracy:\\n\\n\\\\> 1. How good the adversary model needs to be for the proposed method to perform well? In the experiments, an auto-regressive transformer model based on the GPT-2 language model is employed. What is the accuracy of this model on the train dataset of the DRO problem? Will the proposed method performance be too sensitive to the accuracy of the adversary model?\\n\\nWe have replicated the experiments on BiasedSST, using a simple, one-layer LSTM model as the adversary. As it turns out, we had run these experiments already and they were in fact included in the original submission (albeit in the appendix; C.2). The setup is almost exactly the same as that of Section 3, except we only search for the learning rate in order to reduce the number of experiments (moreover, as outlined in our previous response, we find that the other hyper-parameters k and \\\\tau have limited effect). \\n\\nThis adversary achieves lower generative modeling performance than our autoregressive transformer (test perplexity 227.01 vs 49.84) on the biasedSST dataset. Yet, we find that it allows P-DRO to achieve robust test accuracy of 43.68 on biasedSST, which also outperforms all baselines.\\n\\nIn fact, this is even higher than our results obtained with the transformer model. We find that this surprising result can be explained as an effect of the minmax validation strategy. When we remove this factor and choose hyper-parameters and early stopping with robust accuracy instead, we find that the LSTM adversary reaches a robust accuracy of 45.68 versus 47.53 for the transformer adversary, giving the latter a slight edge.\\n\\nIn summary, we find that the size of the adversary as a generative has a limited effect when restricted to neural models. We will clarify the discussion in the paper and make sure to attract more attention to these results in the main text.\"}",
"{\"title\": \"response\", \"comment\": \"Thanks for the swift response!\\n\\nRegarding the second point, that makes sense. I am still dissatisfied with the reasons for suddenly flipping the KL because it fundamentally changes the optimization problem, but I don't have to die on this hill.\\n\\nFor the first point however, I am not sure you are understanding my concern. If you use the empirical measure, then this means that $p_{\\\\tau, \\\\theta}^*$ is supported only on finitely many points. This means that the KL that you *actually* comptue in practice is \\n$$ \\\\sum_{i=1}^n q_{\\\\psi}(x_i) \\\\log(q_{\\\\psi}(x_i) / p_{\\\\tau, \\\\theta}^*(x_i)),$$ \\ni.e. you discretize the density $p_{\\\\tau, \\\\theta}^*$ into a new measure supported only on (finitely-many) support points $x_i$. Defining this discrete measure (for the dirac delta $\\\\delta_x(y)$ being $0$ everywhere except if $y=x$) as\\n$$ m_{\\\\psi}(x)=1/n\\\\sum_{i=1}^n\\\\delta_{x_i}(x) \\\\cdot q_{\\\\psi}(x) $$\\nthis means the uncertainty set that you are *actually* computing is with respect to\\n$$\\\\text{KL}(m_{\\\\psi}\\\\|p_{\\\\tau, \\\\theta}^*) \\\\neq \\\\text{KL}(q_{\\\\psi}\\\\|p_{\\\\tau, \\\\theta}^*)$$\", \"note_that_this_is_not_some_stickler_remark\": \"If the two measures $\\\\nu$ and $\\\\mu$ are not absolutely continuous with respect to one another, then (by definition!) we have that $\\\\text{KL}(\\\\mu \\\\|\\\\nu) = \\\\infty$. In other words, while $\\\\text{KL}(m_{\\\\psi}\\\\|p_{\\\\tau, \\\\theta}^*)<\\\\infty$, the mismatch of support problem means that $\\\\text{KL}(q_{\\\\psi}\\\\|p_{\\\\tau,\\\\theta}^*) = \\\\infty$.\\n\\nFor a definition of the KL in the measure-theoretic sense, see e.g. Def. 359 here:\", \"https\": \"//www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUKEwjW2aHA-pDtAhU7QkEAHbTYBWwQFjANegQIYxAC&url=https%3A%2F%2Fwww.stat.cmu.edu%2F~cshalizi%2F754%2F2006%2Fnotes%2Flecture-28.pdf&usg=AOvVaw1tPO-Fq79f_PqetUPJuYfD\"}",
"{\"title\": \"Answer to follow up questions\", \"comment\": \"\\\\> I thank the authors for the extensive answer and appreciate the time they took!\\n\\nLikewise, the reviewer\\u2019s willingness to engage in discussion and clarify their concerns is much appreciated.\\n\\n\\\\> (1) Regarding the finiteness of the KL, [...] What do you do in practice? Whatever you do, it should go into the paper :)\\n\\nIn practice, we reverse the $KL(q_\\\\psi || q_{\\\\tau, \\\\theta}^*)$ to $KL( q_{\\\\tau, \\\\theta}^* || q_\\\\psi)$ which can be re-written ($\\\\mathcal L_{rev}$, Eq. 8) as the ratio of $\\\\mathbb E_p e^{\\\\ell/\\\\tau}\\\\log q_\\\\psi$ and the normalizer $Z_{\\\\tau,\\\\theta}=\\\\mathbb E_p e^{\\\\ell/\\\\tau}$. Both are expectations (over $p$) of random variables which do not depend on $p$ (and hence can be computed directly). In both cases, we indeed plug-in the empirical distribution in place of $p$ in the expectation. This will be clarified in the revision (in Section 3.2 after Eq. 8) by adding the following sentence: \\u201cIn practice, we compute the expectation by sampling from the empirical distribution.\\u201d.\\n\\n\\\\> (2) I have to say that the arguments for reversing the KL are really unsatisfactory. [...] Could you not reverse the direction of the KL in your uncertainty set so that it directly appears in the 'correct direction' in eqs (6) and (7)?\\n\\nThis is an interesting remark. In fact, defining the uncertainty set based on the reverse KL would lead to similar optimization issues, at least without some additional approximations or tricks. The reason for this is that the resulting lagrangian ($\\\\mathbb E_{q_\\\\psi} \\\\ell + \\\\tau E_p \\\\log q_\\\\psi + [\\\\text{Constant in }\\\\psi]$) still contains a term in $\\\\mathbb E_{q_\\\\psi}$ which is difficult to optimize for $\\\\psi$. On the other hand, the reversal of the KL in Eq 7 (leading to the $\\\\mathcal {L}_{rev}$ objective in Eq. 8) allows us to avoid taking an expectation on $q_\\\\psi$ which is the main optimization hurdle. Because of this, our formulation is not directly equivalent to reversing the direction of the KL constraint on the uncertainty set.\"}",
"{\"title\": \"Thanks + follow-up question\", \"comment\": \"I thank the authors for the extensive answer and appreciate the time they took!\\n\\n(1) Regarding the finiteness of the KL, I am afraid your comment does not address my concern completely: Even though your KL-ball is well-defined between true data-generating distribution and the model (provided that both admit densities that are absolutely continuous with respect to one another), there is a remaining problem. Specifically, since you have no access to the *true* data generating mechanism, you will have to approximate the KL-ball with the empirical data distribution for computation. If I understand correctly, eq. (7) will still depend on the empirical distribution ($p(x,y)$) via $q_{\\\\tau, \\\\theta}^*$. This means that the problem would persist---unless you only evaluate $q_{\\\\psi}$ at the finitely many support points of $p(x,y)$. What do you do in practice? Whatever you do, it should go into the paper :)\\n\\n(2) I have to say that the arguments for reversing the KL are really unsatisfactory. Of course it's not your fault that the KL behaves badly when optimized, but it raises the question why you would like to define your objective the way you do. Could you not reverse the direction of the KL in your uncertainty set so that it directly appears in the 'correct direction' in eqs (6) and (7)? I don't see any part of your argument chain that would prevent you from doing that, and saying that \\\"optimizing the forward direction is hard\\\" would directly justify why you are defining the uncertainty set based directly on the reverse KL? Please let me know if this would be impossible, but I don't see why it would be.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for their encouraging feedback. We address their specific concerns below, and we are happy to continue discussing any of these points or answer follow-up questions.\\n\\n\\\\> The experiments are only on NLP tasks\\n\\nWhile in general our proposed approach can be applied to any modality, the reviewer is correct that we only experiment on NLP datasets (except in our toy experiment in Section 4.4). As mentioned in the paper, this is motivated by the widely recognized success of language models, which make them a prime candidate for testing P-DRO.\\nFor other modalities where generative models are either not readily available or can\\u2019t provide normalized probabilities efficiently, such as GANs, an alternative solution might be to model the likelihood ratio q_psi/p directly, however this poses a variety of other challenges, which we defer to future work.\\n\\n\\\\> How good of a generative model is the adversary, and how important is its performance?\\n\\nAs pointed out by the reviewer, in most of our experiments, the adversary is a transformer model based on the GPT-2 architecture (albeit with fewer parameters than the actual GPT-2 model). On the BiasedSST dataset, this model attains a \\u201cperplexity\\u201d of 49.84 (note: this model predicts both label and text, as such the perplexity is not directly comparable to regular language models). Measuring the effect of the adversary\\u2019s performance on the effectiveness of P-DRO is an interesting ablation study. Should time and computing resources permit, we will make our best efforts to obtain additional results with smaller adversaries during the rest of the rebuttal period. \\n\\n\\\\> How are \\\\tau and k chosen in practice?\\n\\nAs shown in Section 4, \\\\tau and k can be chosen via grid-search using the Minmax criterion described in Section 3. For the experiments in Section 5 specifically, we fixed \\\\tau and k in order to reduce the search space and make grid search more manageable. Possibly, better results could be obtained in Section 5 by searching for better \\\\tau and k. We will edit the paper to clarify this.\\n\\nAs to the effect of the choice of k and \\\\tau on performance, we performed an ablation study on BiasedSST. We start from the configuration \\\\lambda=10^-4, k=5 and \\\\tau=0.01 and vary either k or \\\\tau. We report two numbers for each configuration: robust accuracy of the best model using Greedy-Minmax stopping and using Oracle stopping. The latter is useful to disentangle the effect of the stopping criterion.\\n\\n||Robust Accuracy (Minmax stopping)|Robust Accuracy (Oracle stopping)|\\n|-|-|-|\\n|k=0 | 41.98 \\u00b1 4.48 | 49.60 \\u00b1 5.39 |\\n|k=5 | 44.74 \\u00b1 3.24 | 50.43 \\u00b1 5.05 |\\n|k=10 | 32.17 \\u00b1 11.20 | 50.95 \\u00b1 5.01 |\\n|-|-|-|\\n|\\\\tau=0.1 | 39.72 \\u00b1 5.55 | 50.00 \\u00b1 4.98 |\\n|\\\\tau=0.01 | 44.74 \\u00b1 3.24 | 50.43 \\u00b1 5.05 |\\n|\\\\tau=0.001 | 44.74 \\u00b1 3.24 | 50.87 \\u00b1 5.09 |\\n\\nInterestingly, neither k nor \\\\tau have a strong effect on robust performance when using Oracle stopping. We will add these ablation studies to the updated manuscript.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We appreciate the reviewer\\u2019s enthusiasm for our approach, and are grateful for the insightful feedback. We address their specific concerns below, and we are happy to continue discussing any of these points or answer follow-up questions.\\n\\n\\\\> How do we ensure that the KL divergence between q_\\\\psi and the empirical distribution p is finite or even well-defined?\\n\\nThe reviewer makes an keen observation that the empirical distribution, being of finite support, does not have a finite KL divergence with q_\\\\psi. In truth, we are only interested in the KL divergence between q_psi and the true underlying data distribution, which we can reasonably assume is finite. We agree that the phrasing of the paper is confusing in this regard, as we interchangeably refer to both the \\u201ctrue\\u201d data distribution and the empirical distribution (of finite support over the training data) as p. We will edit the paper to make this clearer.\\n\\n\\n\\n\\\\> Why is flipping the KL viable?\\n\\n\\nThe reviewer is correct that the KL is not symmetric, and as such the reversed loss L_rev is not equivalent to the original \\u201cforward\\u201d KL minimization problem. First, we would like to clarify that we did try to optimize the forward-KL constrained objective and found in our toy experiments (Section 4.4) that this generally failed. This failure is echoed by a variety of previous work (eg. RAML (Norouzi et al., 2016), but also in the RL literature). We do agree that flipping the KL divergence is an unsatisfactory approximation (and we will update the paper to further emphasize this point), however as shown empirically in previous work, it seems to be effective in practice.\\n\\nUltimately, we choose to make concessions to the optimization concerns, to the expense of theoretical exactness. We do believe that attempting to directly minimize the forward KL is a promising future direction. However, the current version of the paper demonstrates empirically that the KL reversal is not only viable, but is also sufficient for P-DRO to yield more robust models. If performing P-DRO with the forward KL results in superior results than the results that we have obtained with reverse KL, then we argue that it would only further improve the utility of our already-promising approach.\\n\\n\\\\> Missing references (Husain, 2020 and Nguyen et al. 2020)\\n\\nWe thank the reviewer for pointing out these recent relevant references, which we\\u2019ll include in the upcoming revised version of the paper.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for their encouraging comments and helpful feedback. We address their specific concerns below, and we are happy to continue discussing any of these points or answer follow-up questions.\\n\\n\\n\\\\> Since there is no novel techniques proposed in this paper and there is no performance guarantee for the proposed framework, overall, I think this is a borderline paper due to its limitations in theoretical development and technical novelty\\n\\nWe do agree with the reviewer that the paper does not make a strong theoretical contribution, and our approach is very much motivated by proposing a method that works in a practical scenario rather than what is most satisfying theoretically.\\n\\nThe main novelty of the paper is the use of a parametric family as the uncertainty set in DRO. However, our contribution goes beyond the brute-force approach of simply plugging parametric models into the classical DRO min-max (which doesn\\u2019t work, as demonstrated in our toy experiments in Section 4.4). In particular while a number of the adjustments detailed in Section 3 are not novel by themselves (lagrangian relaxation, KL reversal...), the fact that they can be combined and applied to the problem of parametric DRO is (in the authors\\u2019 opinion) far from being a given.\\n\\n\\n\\\\> More baselines\\n\\nThe reviewer\\u2019s point regarding more baselines is well taken. First, we would like to point out that Wasserstein DRO presumes the existence of a canonical metric on the input space, of which there is none for discrete sequential inputs such as natural language sentences. Adaptation of Wasserstein DRO to NLP is an interesting direction, but it is far from straightforward, and would warrant a more thorough investigation of its own. Huber\\u2019s work on robust statistics solves a related but different setting: ensuring that models are robust to an adversary who modifies the training data. Our paper considers the problem where the training data is fixed, and the test distribution is potentially different. To our knowledge, Huber\\u2019s robust statistics approaches do not directly address KL-robust DRO problems.\\n\\nThat being said, we did run the additional baseline of non-parametric KL-constrained DRO, inspired by the formulation of Hu et al. (2016) (https://arxiv.org/abs/1611.02041). We refer to our general response to all reviewers for more details and initial results. We are currently working towards adding these additional baseline results throughout the paper.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for their detailed feedback. We address their specific concerns below, and we are happy to continue discussing any of these points or answer follow-up questions.\\n\\n\\\\> I don't see just how this model is \\\"parametric\\\"\\n\\nThe proposed approach is parametric in the sense that the confusion set is represented by a parametric family of models. To take the reviewer\\u2019s example, in one of our ablation experiments (Section 4.4), the adversary is indeed a gaussian, and its sought-for parameters are the mean.\\nIn our other experiments, the uncertainty set is composed of transformer-based language models. This setting is also parametric in the sense that each possible test distribution in the uncertainty set is associated with a set of parameters for a transformer model, and these parameters are optimized jointly with the classification model following the procedure described in Section 3.\\n\\nTo clarify this point, we have added results for a non-parametric KL-constrained baseline. Please refer to our general response to all reviewers for more details.\\n\\n\\\\> Comparison to Faury et al. (AAAI 2020) \\\"Distributionally Robust Counterfactual Risk Minimization\\\"\\n\\nWe thank the reviewer for pointing out this relevant citation, which we\\u2019ll include into the next revision. As far as we can tell, this paper proposes a non-parametric KL-constrained formulation of DRO which is very similar to that of Hu & Hong (2013) or Hu et al. (2016), but applied to CRM. In fact, Faury et al. corroborate our point: they state (eg. Section 2.2) that \\u201cWe are interested in DRO instances that are amenable to direct optimization. To this end, we focus here only on uncertainty sets U based on information divergence\\u201d.\\nIn our work, we consider the challenges that occur when moving outside this tractable set of uncertainty sets, and consider intersections of the KL uncertainty set with parametric models where the inner-maximization problem becomes intractable (hence the need for the approximations described in Section 3).\\n\\nAgain, we refer to our general response for more details on an additional, relevant baseline.\\n\\n\\\\> The technical contribution of the paper is negligible (if any)\\n\\nTo the best of our knowledge, we are the first to investigate DRO with neural-network based parametric confusions sets and to address the associated challenges (intractability of the inner-max, difficulty of enforcing the KL constraint...). We believe these are all technical contributions that are not attested to by previous research, and all required a significant amount of thought, design, implementation, and empirical validation.\\n\\n\\\\>Since the paper is supposed to be empirical (see previous points), I would have expected experiments on real datasets.\\n\\nWe would like to point out that the experiments in the final section of the paper are performed on two toxicity detection datasets, which are well established datasets addressing an important real problem and widely used in the community: Davidson et al. (2017) and Founta et al. (2018) (760 and 109 citations respectively according to google scholar).\\n\\n\\\\> In Eq. 
why not take q_psi_0 and p to equal the empirical distribution (as is usually done) in DRO ?\\n\\nDue to its parametric nature, the support of the adversary q_\\\\psi is larger than that of the empirical distribution, therefore, we need to use the true data distribution p (which we assume has the same support as q_psi) in the denominator. In practice, since p is unavailable, we resort to the MLE q_\\\\psi_0, which also has the same support.\"}",
"{\"title\": \"General Rebuttal\", \"comment\": \"We thank all four reviewers for their feedback. We address each reviewer\\u2019s specific concerns in separate replies, and are happy to continue discussing any of these points or answer follow-up questions.\\n\\nA few reviewers brought up the comparison to the non-parametric KL-constrained approach which is similar to our proposed approach. We ran additional experiments to compare to this approach, inspired by the formulation of Hu et al. (2016) (https://arxiv.org/abs/1611.02041). We slightly adapted the algorithm to our setting (minibatch training of large models, we will outline those modifications in more detail in the upcoming revision of the paper). In particular, we experiment with 4 values for the radius of the KL ball (which controls the size of the uncertainty set): 0.01, 0.1, 1 and 10.\", \"we_report_initial_results_on_biasedsst_for_two_variants\": \"\", \"average\": \"we use average accuracy for both stopping and hyper-parameter selection\", \"minmax\": \"we adapt our proposed Minmax criterion for stopping and hyper-parameter selection.\\n\\nResults are as follows (robust test accuracy):\\n\\n| Method| Robust Accuracy |\\n| --|--|\\n|ERM | 2.15 \\u00b1 0.97|\\n|Topic CVaR | 5.18 \\u00b1 1.46|\\n|Non-param (Average) | 8.51 \\u00b1 4.62|\\n|Non-param (Minmax) | 21.68 \\u00b1 4.85|\\n|P-DRO | 34.98 \\u00b1 9.39|\\n|Oracle DRO | 67.71 \\u00b1 3.03|\\n\\nFirst, we confirm that P-DRO yields more robust models than its non-parametric counterpart. Second, this further confirms the effectiveness of our proposed Minmax validation criterion, which also significantly improves the results of the non-parametric model.\\n\\nWe are currently working towards adding these additional baseline results throughout the paper.\"}",
"{\"title\": \"Modeling the Second Player in Distributionally Robust Optimization\", \"review\": \"Good points\\n----\\n- The objective of the paper is sound: fight distributional shift in systems whose predictions\\nmight have life-changing consequences (e.g data bias toxicity prediction models, etc.).\\n- The paper is well-written and easy to follow.\\n\\nBad points\\n----\\n- I don't see just how this model is \\\"parametric\\\". In statistics, \\\"parametric\\\" the adversarial\\ndistribution is modeled as a gaussian, etc. with sought-for parameters (mean, covariance, etc.).\\nIn the absence of that, I would have expected \\\"parametric\\\" to mean parametrizing the adversarial\\ndistribution as the (softmax) output of a neural network. Neither of the above is the case in \\nthis paper. So, what are the \\\"parameters\\\" in the proposed DRO adversary ? All I can see is that\\nthe authors do a full search over all distributions, subject to a KL constraint (see sections\\n2 and 3.2).\\nThere is nothing \\\"parametric\\\" about this.\\n- The authors say \\\"In particular, direct gradient descent on the uncertainty set suffers from\\ninstability due to the large variance of the gradients (Greensmith et al., 2004), and\\nhyper-parameter selection is not straightforward.\\\" I'm not sure about this claim (which\\nis one of the main premises of the manuscript. What do the authors make of this paper\\nfor example Faury et al. (AAAI 2020) \\\"Distributionally Robust Counterfactual Risk Minimization\\\" ?\\nThe authors of that paper demonstrate how to efficiently formulate and solve KL-based DRO\\nproblems. That paper also contains both theoretical and practical insights.\\n- The technical contribution of the paper is negligible (if any).\\n- The arguments in the paper very heuristic.\\n- Since the paper is supposed to be empirical (see previous points), I would have expected\\nexperiments on real datasets.\\n\\n\\nErrors\\n---\\n- Change \\\"solve the inner-max efficient\\\" to \\\"solve the inner maximization problem efficiently\\\"\\n- Change \\\"$x, y ~$ \\\" to \\\"$(x,y) ~ $\\\" all through the manuscript\\n- Eqn (5): why not take $p$ and $q_{\\\\psi_0}$ to equal the empirical distribution (as is usually\\ndone) in DRO ?\\n- In eqn defining $q_{\\\\psi_0}$, replace $\\\\arg\\\\max_{q_\\\\psi}$ with $\\\\arg\\\\max_\\\\psi$\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The paper proposes a novel and important DRO method, and good experiments are conducted to evaluate the efficacy of the proposed method.\", \"review\": \"The paper proposes to define the uncertainty set in the DRO problem as a family of parametric generative models, which is to allow more flexibility in the choice of the uncertainty set architecture. To realize this idea, the paper first proposes a new relaxation of the DRO game's inner maximization problem (with KL constraints) so as to improve the training stability. It then develops a principled approach to select the hyper-parameters of the proposed method.\", \"strengths\": [\"The paper is well-written.\", \"The proposed method is novel and important for the DRO community.\", \"Experiments with real-world problems are conducted to evaluate the effectiveness of the proposed method. I particularly like the experimental analysis the authors conducted to understand the behavior of their proposed method.\"], \"weaknesses\": [\"The experiments are only on NLP tasks.\"], \"i_have_few_questions_to_the_authors\": \"1) How good the adversary model needs to be for the proposed method to perform well? In the experiments, an auto-regressive transformer model based on the GPT-2 language model is employed. What is the accuracy of this model on the train dataset of the DRO problem? Will the proposed method performance be too sensitive to the accuracy of the adversary model?\\n2) In the experiment (last paragraph of Section 5.1), the temperature \\\\tau and the normalizing window k are fixed whilst the adversary learning rate \\\\lambda is searched by grid-search. So how \\\\tau and k are selected in practice? What is the performance of the proposed method when \\\\tau and k vary?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Two key issues left unaddressed\", \"review\": \"TL;DR: The paper makes an interesting contribution from a practical point of view, but two important theoretical concerns need to be addressed in the rebuttal for acceptance.\\n\\nThe paper proposes the use of ideas taken from the literature on distributionally robust optimization within a parametric framework. More precisely, the main idea is to consider only a (parameterised) subset of the traditional KLD-uncertainty sets. As this avoids the need for elegant analytic solutions (at the expense of a more brute force computation), it has the flavour of a more \\u2018black box\\u2019 approach towards the deployment of DRO.\\n\\nOverall, I really enjoyed the way this paper was written. Purpose and use of the contributions are clear throughout, and the reader is drawn in. I also liked the contribution and believe that the paper demonstrated its ideas to be useful. There are however two points of major concern from a more theoretical side. In my mind, these are rather substantial, and I will list them below. To recommend that the paper be accepted, these points will have to be addressed in a future version of the paper:\\n\\n(1) How do you ensure that the KLD between $q_{\\\\psi}$ and $p$ is finite? p clearly is the empirical measure (as is emphasised e.g. just above eq. (5)), but $q_{\\\\psi}$ will be continuous. This means that the KLD between the two distributions is not defined/infinity for any value of \\\\psi (Mismatch of support problem). These kind of problems are the precise reasons why other quasi-distances (like the Wasserstein distances) have become increasingly interesting for ML. As far as I can tell, this problem is not elaborated upon anywhere in the paper. \\n\\n(2) It is totally unclear to me why it should be viable to suddenly flip the direction of the KLD. The KLD is not symmetric and in general will not even have the same minimum. In fact, generally speaking the only time the minimum will be the same in either direction is when the KLD\\u2019s global minimum is such that $q_{\\\\psi} = q_{\\\\tau, \\\\theta}$ (i.e. we can drop the KLD term for the loss in (7) completely, so that it simply equals $C$). Given the definition of $q_{\\\\tau, \\\\theta}$, it is unreasonable to assume that this global minimum is attained. This makes the flipping of the KLD\\u2019s direction questionable at best. Calling the outcome an \\u2018approximation\\u2019 is then grossly inaccurate. (See e.g. the visualisations here: https://wiseodd.github.io/techblog/2016/12/21/forward-reverse-kl/)\\n\\nLastly, since the chief concern of the paper is the construction of new uncertainty sets, I would have liked to see two additional recent references discussed which have produced uncertainty sets purely based on moments (https://arxiv.org/abs/2007.04458, ICML 2020) as well as on general IPMs (https://arxiv.org/abs/2006.04349, NeurIPs 2020). Both these types of uncertainty sets do *not* suffer from the mismatch of support problem, and\\u2014like the famous f-divergence based uncertainty sets\\u2014have elegant dual forms.\", \"post_discussion\": \"The authors promised to clarify the two issues I pointed out in ways that are satisfactory for a paper whose main concern is practicality (as opposed to theoretical rigour). I will thus raise my score to a weak accept.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Recommendation to Accept\", \"review\": \"This paper considers distributionally robust optimization (DRO) and uses the neural generative models to characterize the uncertainty sets. To tackle the optimization challenges, several implementation tricks are incorporated to solve the minimax problem. The proposed robust method is validated on NLP tasks.\\n\\nThis paper is well-written and of a good structure. Although the main idea is simple, the authors make several modifications to the algorithm to make it tractable and with performance guaranteed heuristically. To summarize, the main contribution of this paper is a new algorithm that combines standard techniques, such as Lagrangian relaxation and KL reverse, into the DRO problem with KL uncertainty sets. And this algorithm was shown to perform well under synthetic and real-data NLP tasks. Since there is no novel techniques proposed in this paper and there is no performance guarantee for the proposed framework, overall, I think this is a borderline paper due to its limitations in theoretical development and technical novelty. \\n\\nMoreover, if the main focus of this paper is on developing a new computational framework that can lead to more robust results, then the authors should compare with more benchmark methods, while I only see the comparison with ERM, Topic-CVaR, etc. For example, I am wondering is it applicable to compare with Wasserstein DRO or Huber's classical work of Total variation based DRO, or some other DRO works in the literature, so that it will be more convincing on the performance of the proposed method.\", \"a_minor_typo_in_the_paper\": \"in section 6, there is a duplicated \\\"produce\\\" in the sentence: \\\"In such cases where good quality generative models are unavailable, or such model cannot produce produce densities efficiently\\\".\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
JFKR3WqwyXR | Neural Jump Ordinary Differential Equations: Consistent Continuous-Time Prediction and Filtering | [
"Calypso Herrera",
"Florian Krach",
"Josef Teichmann"
] | Combinations of neural ODEs with recurrent neural networks (RNN), like GRU-ODE-Bayes or ODE-RNN, are well suited to model irregularly observed time series. While those models outperform existing discrete-time approaches, no theoretical guarantees for their predictive capabilities are available. Assuming that the irregularly-sampled time series data originates from a continuous stochastic process, the $L^2$-optimal online prediction is the conditional expectation given the currently available information. We introduce the Neural Jump ODE (NJ-ODE) that provides a data-driven approach to learn, continuously in time, the conditional expectation of a stochastic process. Our approach models the conditional expectation between two observations with a neural ODE and jumps whenever a new observation is made. We define a novel training framework, which allows us to prove theoretical guarantees for the first time. In particular, we show that the output of our model converges to the $L^2$-optimal prediction. This can be interpreted as a solution to a special filtering problem. We provide experiments showing that the theoretical results also hold empirically. Moreover, we experimentally show that our model outperforms the baselines in more complex learning tasks and give comparisons on real-world datasets. | [
"Neural ODE",
"conditional expectation",
"irregular-observed data modelling"
] | Accept (Poster) | https://openreview.net/pdf?id=JFKR3WqwyXR | https://openreview.net/forum?id=JFKR3WqwyXR | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"h_CkLcTos8Y",
"LSJkxNXE-aH",
"2ZK9MUQL2n",
"HuFavHWfPAf",
"RsZC-znM1y8",
"w_wbyanGIiR",
"GY5WBy_j4TQ",
"ysRFjqMTCqT",
"Tg5NAlEIpUn",
"zNU-YJQKMzU",
"HvZIss6WPIE",
"S-eg0sehh4",
"9G-gmwicrfq",
"t1bNDglOq2Z",
"LANJaI_15K0"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1615885604017,
1610040359883,
1606300907375,
1606300487875,
1606268237467,
1606227930472,
1606135719950,
1605714474474,
1605714177144,
1605713932602,
1605713177990,
1604266228060,
1604019183826,
1603924241712,
1603865538017
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3695/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Thank you\", \"comment\": \"We thank you and all the reviewers for all the valuable feedback that contributed to considerably improving the paper.\\nWe changed the title of the paper based on your suggestion, thank you.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a refinement, and analysis of, continuous-time inference schemes.\\n\\nThis paper got in-depth criticism from some very thoughtful and expert reviewers, and the authors seem to have taken it to heart. I'm still worried about the similarity to GRU-ODE-Bayes, but I feel that the clarifications to the general theory of continuous-time belief updates is a worthy contribution, and the method proposed is a practical one. One reviewer didn't update their score, but the other reviewers put a lot of thought into the discussion and also raised their scores.\\n\\nI do think the title and name of the method is a bit misleading - I would call it something like \\\"Consistent continuous-time filtering\\\", because the jump ODE is really describing beliefs about an SDE.\"}",
"{\"title\": \"Answers to new comments 2/2\", \"comment\": \"__3. Theory__\\nWe completely agree with you that those methods that provably attain a global optimum are not the ones used in practice. We would like to emphasize that the convergence guarantees we provide are independent of the choice of the optimization method used to find the optimal weights. By citing those algorithms with provably convergence, we only wanted to mention that if you really wanted to have theoretical convergence from A to Z, you could get it.\\n\\nWe are fully aware that the study of global convergence of the stochastic gradient descent based methods with non convex objective functions and more specifically neural networks is an active research topic with multiple open questions. Therefore, it is natural not to try to solve this hard and independent problem here. However, even if a proper convergence study of this part is not available, it does not mean that the rest of the convergence study is not relevant for justifying the modeling part. Let us recall our main theorem.\\n\\n(1) We assume that for each number of sampling and for every size of the neural networks, their weights are chosen optimally, as to minimize the loss function. (2) Then, if the number of paths and the size of the neural networks tend to infinity, the output of our model converges in mean ($L^1$-convergence) to the conditional exception of the stochastic process $X$ given the current information.\\n\\nThis theorem proves that our algorithm with our specific loss function converges to the target process (the conditional expectation of the stochastic process). We want to emphasize that assuming to have the optimal weights (with respect to the loss function) does not immediately imply that the output of the model will give the desired result. For example, when we first started to construct our loss function, the convergence to conditional expectation was not guaranteed. We had to change the loss function in order to guarantee this convergence. For example, in GRU-ODE-Bayes, this type of convergence is not given, meaning that they cannot claim that the output of their model will always give the desired result. For instance, in an additional empirical example we provided (Heston model), their algorithm cannot manage to reach the desired result and produces worse outputs when the size of the network is increased. For this reason, we think the theory is particularly relevant for justifying the modeling part, even if we assume to have found the optimal weights. \\n\\nConcerning the theoretical argument, while we don\\u2019t introduce new tools but rather make use of several existing mathematical tools (Probability theory, Stochastic Calculus and the universal theorem of approximation), we provide a new proof showing that the output of the model converges to the desired result. We are confident that this proof can inspire further research to derive similar theoretical guarantees. For instance, one could be inspired by this proof to show convergence of the ODE-RNN or GRU-ODE-Bayes, or to modify them in order to guarantee convergence. This proof might be adjusted for other similar problems. Therefore, we think that the given theoretical argument can be beneficial to related subfields.\\n\\nWe thank you for initiating this discussion about the need of theoretical guarantees. \\n\\n__Summary__ \\nWe hope this addresses all of the reviewer's concerns. We thank you again for the interesting discussion and valuable feedback.\"}",
"{\"title\": \"Answers to new comments 1/2\", \"comment\": \"We thank you for taking the time to study the updated paper and the answers we provided. We also thank you for your new valuable remarks and concerns which contributed to further improvements of the paper.\\n\\n__1. Algorithm__ \\nYou are completely right that, from a theoretical point of view, the inner loop of the algorithm could be expressed by a single ODE, where continuously-in-time the outputNN is applied to the latent variable $h_t$ producing $y_t$ for all $t_i < t < t_{i+1}$. Actually, this is what we describe in equations (30) and (31) in the appendix. However, if we want to give an implementable pseudo-algorithm, this continuous-in-time procedure needs to be discretized. Since the outputs between observation times are important, we wanted to show how this discretization can be done. We think that it gives additional insight for the practical point of view, which was not provided by GRU-ODE-Bayes and ODE-RNN.\\n\\nYou are correct about the high-level framework, which is the same as in GRU-ODE-Bayes and ODE-RNN. To establish the connection to this, we added a description similar to the ODE-RNN description (new equation (7)), and clarified that the pseudo-algorithm is an implementable version of it. Thank you for pointing this out.\\n\\nYour description of the GRU-ODE-Bayes architecture and the difference to NJ-ODE is completely correct. As we described in the paper in the paragraph \\u201cGRU-ODE-Bayes\\u201d after equation (6), this architecture can be understood as a special case of the ODE-RNN architecture. In particular, a GRU is used for the RNN and a continuous version of the GRU for the neural ODE. Therefore, we think it is enough to explain the difference between our model architecture and the ODE-RNN architecture. Thank you for pointing this out, we clarified that in the paragraph where the GRU-ODE-Bayes is presented. \\n\\n__2. Loss function__ \\nYou are right that there exist similarities between the loss function of GRU-ODE-Bayes and NJ-ODE. This is not surprising, since both models try to control the jumps and also the behaviour between the jumps, which is done by the respective parts of the losses.\\nTo respond to your question, apart from theoretical convergence reasons, there is no other motivation for our choice of loss function. Those choices were completely guided by the convergence proof. We did not try to change specific things as you suggested (combination of different loss functions) for the following reason. \\n\\nTo apply a negative log-likelihood loss function (pre-loss) and a loss function based on the KL-divergence (post-loss), an assumption on the conditional distribution is needed. In particular, GRU-ODE-Bayes makes the assumption that the conditional distribution is gaussian and models its time-dependent parameters. This assumption is restrictive, if the true underlying conditional distribution can not be described by a gaussian distribution (as for example in the case of the Heston dataset). We go another way, where we do not make any assumption on the underlying distribution. This implies that we cannot use a loss term where such an assumption would be needed.\"}",
"{\"title\": \"new comments\", \"comment\": \"I genuinely thank the authors for putting in the effort to try address my concerns. The draft is now more appealing to the average machine learner to which this conference targets.\\n\\nAfter carefully pondering specific points, I still have many concerns, which I believe should be carefully addressed:\\n\\n1. After reading Algorithm 1 for multiple times, I cannot see the motivation behind fixing a step size \\\\delta t. It seems the inner loop can be rewritten as a single ODE solve, with the the output y_t being the result of applying the outputNN with the corresponding h_t. The overall model can thus be described as \\\"the state follows an ODEs in between observations, and is applied a jump (defined by an NN at observations)\\\". This is basically the high-level framework of both GRU-ODE-Bayes and ODE-RNN. \\n\\nTake GRU-ODE-Bayes as example, the ODE defining the state's evolution in between observations is the continuous-time version of a gated recurrent unit, while the jump update is the gated recurrent unit's update. \\n\\nIf this accurate, then in my opinion the difference between the proposed NJ-ODE and GRU-ODE-Bayes is that 1) NJ-ODE uses a general NN to define the ODE between observations, as opposed to the cont. time version of GRU, and 2) uses a general NN to define jump updates at observations, as opposed to the GRU update. \\n\\nWhile the authors have mentioned this distinction vs ODE-RNNs, I believe the authors should also include a description comparing to GRU-ODE-Bayes. \\n\\n2. After carefully reading eq (7), I sense a striking resemblance compared to the loss in the GRU-ODE-Bayes paper. Loosely speaking, the \\\"jump part at observation\\\" can be mapped to the \\\"preLoss\\\" in GRU-ODE-Bayes (assuming observation), with the caveat that in the GRU-ODE-Bayes paper, the predictive mean is based on the hidden state prior to jump. The \\\"continuous part between two observations\\\" can be mapped to the \\\"postLoss\\\" in GRU-ODE-Bayes, with the caveat that p_obs needs to be ignored there (so that the KL between Gaussians is the norm of mean scaled by a function of the shared variance). \\n\\nApart from theoretical convergence reasons, what are the motivations in making these changes? How does changing specific things (e.g. use pre-jump vs post-jump update for preloss) affect results in practice? What happens if only one aspect of the overall loss is changed? Are the differences significant? \\n\\n3. I thank the authors for attempting to address my concern regarding the theory. \\n\\nWhile I agree that it is possible to attain global convergence with sim. annealing, I shall emphasize that this is definitely not what we typically use in practice. While there are also special architectures/training paradigms (e.g. NTK) that guarantee global convergence, there are also ample evidence that these architectures and paradigms differ from what's happening in practice when SGD is used to train an NN. For this reason, I don't think the theory is particularly relevant for justifying the modeling part. \\n\\nThe theoretical argument, while being a new statement, relies mostly on standard results to prove and does not introduce new math/stats techniques, and therefore is unlikely to benefit theoretician working in related subfields.\"}",
"{\"title\": \"Thank you for the revaluation\", \"comment\": \"We thank you for having studied our updated paper and for updating your score. We are glad that you are satisfied with the new version.\"}",
"{\"title\": \"post rebuttal evaluation\", \"comment\": \"Following https://openreview.net/forum?id=JFKR3WqwyXR¬eId=akFNuozOZ1p\\nI have updated my rating.\\n\\nConcerns on clarity and motivation have been addressed properly\"}",
"{\"title\": \"Simplified paper, clarification of contribution and comparison to latent ODE\", \"comment\": \"Thank you for your review.\\n\\n__Simplify the paper__ \\nWe have simplified the main paper a lot, by moving the precise mathematical definition of concepts and the theorems to the appendix. The paper is now written in a similar way as other papers cited, in order to reach a bigger audience of the Machine Learning community.\\n\\n__Section on optimal approximation is vague__ \\nYou are completely right that this section (but also the corresponding references in the abstract and introduction) was vague and that a different norm would yield different optimizers. Therefore, we put more emphasis on clarifying that we consider the minimization problem with respect to the $L^2$-norm throughout the paper.\\n\\n__Novelty__ \\nOur model is different from the previous work. In the ODE-RNN and in the GRU-ODE-Bayes, it is a recurrent neural network, where between two observations, the hidden state is modeled by a neural ODE. In our model, we don\\u2019t use any recurrent neural network. This makes the model much easier and faster to train. We instead have a neural ODE that takes three more parameters (the last observation, the current time and the duration between the current time and the last observation). Moreover, we want to emphasize that we provided a novel training framework. For the first time, we provided a mathematical formulation, a rigorously defined problem statement and based on the new objective function, the theoretical guarantees that our algorithm works, which would not have been possible with a different training framework. In the GRU-ODE-Bayes paper, the authors do not give any theoretical guarantees for the model, only an empirical study. In contrast, our method is proven to converge to the optimal solution.\\n\\n__Added comparison with latent ODE__ \\nIn our paper we consider exactly the same task as the GRU-ODE-Bayes paper. The latent ODE paper, although being very similar, does not consider the same task. In particular, the latent ODE (as it is) can not be used for online forecasting. This is reflected in the way it is applied to the extrapolation task, where the model is trained in a supervised learning setting, mapping the first half (input) of the time series to the second half (target). Compared to this, our (and GRU-ODE-Bayes) approach can be interpreted as unsupervised learning. In contrast to the latent ODE, the ODE-RNN might be used for the online forecasting task, but the authors emphasize (in their paper and their official implementation of the models) that ODE-RNN should be used for interpolation tasks only.\\nThis is the reason why we did not compare our method to the latent ODE in the first place. \\nAlthough our approach is different from latent ODE\\u2019s approach, their extrapolation task is one that can be tackled by both models. We have added an experiment to the paper, where we apply our model in the exact same setting as the latent ODE for the extrapolation task on physionet. Our model achieves a performance of $1.945 \\\\pm 0.007$ ($\\\\times 10^{-3}$) and outperforms the latent ODE with a reported performance of $2.208 \\\\pm 0.050$ ($\\\\times 10^{-3}$).\\n\\n__Assumption that ERM can be found in convergence results__ \\nWe deliberately constructed the loss function in such a way that convergence can be proven, therefore, it is not surprising that our results are expected. 
However, it is the first time that such a proof was provided for the class of neural ODE based models.\\nWe have added a paragraph after the (informal) theorem outlining that the assumption that the ERM can already be found is not restrictive. There exist global optimization methods, as for example simulated annealing, that provably converge to a global optimum in probability. Apart from that, several works try to show that most local optima of neural networks are nearly globally optimal. This implies that in practice, using standard stochastic gradient descent methods which converge to local minima, will supply nearly optimal weights that should be good enough for the approximation.\\nSince the proofs of our theorems depend on the universal approximation theorem, which does not provide convergence rates, we also cannot derive convergence rates from our analysis.\\n\\n__Summary__ \\nWe hope we have addressed every concern that the reviewer has raised. We would be very happy to have further discussion if there are any other obstacles to raising the review score.\"}",
"{\"title\": \"Clarification of contribution over existing work and additional experiments\", \"comment\": \"Thank you for your review.\\n\\n__Show in a clearer manner__ \\nWe have rewritten the main paper and all the rigorous and precise mathematical formulations are moved in the appendix to let place for the important message we want to transmit. It is now written in a very simple way, in order to reach a bigger audience. Thank you for this input.\\n\\n\\n\\n__Contribution over existing work__ \\nOur model is different from the previous work. In the ODE-RNN and in the GRU-ODE-Bayes, it is a recurrent neural network, where between two observations, the hidden state is modeled by a neural ODE. In our model, we don\\u2019t use any recurrent neural network. This makes the model much easier and faster to train. We instead have a neural ODE that takes three more parameters (the last observation, the current time and the duration between the current time and the last observation). Moreover, we want to emphasize that we provided a novel training framework. For the first time, we provided a mathematical formulation, a rigorously defined problem statement and based on the new objective function, the theoretical guarantees that our algorithm works, which would not have been possible with a different training framework. In the GRU-ODE-Bayes paper, the authors do not give any theoretical guarantees for the model, only an empirical study. In contrast, our method is proven to converge to the optimal solution.\\n\\n\\n__Additional experimental validation__ \\nThe main contribution of our project is the theoretical justification and the rigorously defined framework with the theoretical guarantees of convergence. This was not done by any paper prior to our work. For that reason, we think that having tested our model on those three synthetic datasets and on a real world dataset in addition to our theoretical guarantees was suffisant to prove that our algorithm works well in different scenarios.\\nHowever, we have improved our experiment validation by adding the following experiments:\\n- Heston model without the Feller condition. \\n- Switching regime. In the first half of the path, the stochastic process is following a model M1 and in the second half of the path a model M2. \\n- Model with explicit time dependence, i.e. where the drift of the SDE depends on t.\\n- Convergence study also on Ornstein-Uhlenbeck and Heston dataset.\\n- Experiments on Physionet in the same setting as the extrapolation experiment of the latent ODE (ODE-RNN), adding another comparison to a baseline model.\\n\\n__Different task than in previous literature__ \\nActually, the task of predicting the conditional expectation is not new. It is different from the tasks considered in the latent ODE, but very similar to what the GRU-ODE-Bayes does. They also estimate the conditional expectation together with the standard deviation under normality assumption. More precisely, they try to predict the conditional distribution, under the assumption that it is given by a normal distribution. The predicted mean parameter of the normal distribution therefore is exactly the conditional expectation, which is the main interest in all real world forecasting applications. This is the reason why we mainly compared our method to GRU-ODE-Bayes.\\n\\n__Summary__ \\nWe hope we have addressed every concern that the reviewer has raised. We would be very happy to have further discussion if there are any other obstacles to raising the review score.\"}",
"{\"title\": \"Clarification of theorems, terminology and other remarks\", \"comment\": \"Thank you for your review.\\n\\n__Some unclarities in conclusion of Theorems__ \\nThere was a misunderstanding, we thank you for pointing this out. We try to clarify the points. The conditional expectation is the $L^2$-minimizer for the considered forecasting task. However, we can show convergence of the output of our model to the conditional expectation only with respect to the $L^1$-norm (additional assumptions would be needed for convergence in $L^2$-norm). It is important to differentiate here between \\nthe 2-norm that is used to make the d_X-dimensional random variables 1-dimensional inside the expectations and \\nthe $L^1$-norm used to show $L^1$-convergence to 0 for this 1-dimensional random variable. \\nSince the 2-norm is equivalent to any other norm on $\\\\mathbb{R}^{d_X}$, this choice does not influence the result in any way. \\nAs correctly remarked by the reviewer, convergence in $L^1$ does not imply almost sure convergence. However, we did not claim almost sure convergence, but only that the limits are equal almost surely, which is a direct consequence of $L^1$-convergence. To make our claim clearer, we added Lemma E.6 stating this consequence and reference to it.\\n\\n__Sloppy use of notation and terminology__ \\nWe agree that the terminology wasn\\u2019t used appropriately at the outlined points and thank him for bringing this to our attention. We changed the passages to be more precise and appropriate. \\n\\n__Irregular sampling procedure & sampling process as point process__ \\nWe are not sure to correctly understand the remark. The irregular observation dates are needed to describe data that is observed at irregular times. In particular, we do not take the point of view that we have a model of which we can sample as often as we want. Instead, we try to give a mathematical description for data that is irregularly observed at random time points. We changed the terminology in the paper the better explain our point of view. We tried to keep the definition of the irregular observation times very general, under the assumption that the observation times are independent of the stochastic process. \\nThe reviewer is right that point processes on the same probability space are a way to define observation dates that might be correlated with the stochastic process. We had the impression that the given way of defining the observation dates make them easier to understand, even though the product probability space consequently has to be considered. As suggested, examples for randomized sampling processes were added to the paper, including the suggested Poisson point process.\\nWe do not analyse the bias of our algorithm, but only show that it is consistent. It is clear that the sampling procedure can introduce a bias to an estimator. An extreme case would be, that the time interval is divided in half and observations are only made on the first half, for which any estimator could hardly learn anything about the second half. However, this is already incorporated in our convergence results through the dependence on the probability measures $\\\\lambda_k$. \\n\\n__Is__ $\\\\hat{X}_t$ __a stochastic process?__ \\nYes, in contrast to a Gaussian process, in its basic definition, a stochastic process is just a collection of random variables indexed by some index set (Wikipedia). Often this set is the time interval, as for example in the definition in [1, page 3 after Theorem 1]. 
In particular, a stochastic process is defined pointwise at each $t$. Therefore, this definition applies to $\\\\hat{X}_t$, which is defined pointwise. \\n[1] P. Protter. Stochastic integration and differential equations. 2005.\\n\\n__About further minor comments__\\n1. We do not understand why the term \\u201cobservation epochs\\u201d would be suited better. In particular, observations are always made only at discrete time points, rather than on time intervals (which would correspond to \\u201cepochs\\u201d from our point of view). Maybe we misinterpreted this comment?\\n2. We changed it, thanks for pointing this out.\\n3. The definition of $\\\\mathcal{B}([0,T])$ was added, thanks.\\n4. We made a small change to the notation. We are not sure which notation would be better or less confusing for $\\\\tilde{\\\\mu}$. Would you have any suggestions?\\n5. Correct, we changed it, thanks for pointing out.\\n6. What we mean here is that we take an average base only on one realization of the path, by averaging over the different observations in time rather than by averaging over multiple realizations of the paths. Such a time-average only equals the sample average under ergodicity assumptions, which we assume to be satisfied here. We do not explicitly use an ergodic theorem, but suppose that the claim of such a theorem is satisfied in the stated way.\\n7. The definition was added to the paper (now in appendix).\\n\\n__Summary__ \\nWe hope this addresses all of the reviewer's concerns. If the reviewer has any further questions by which our paper and their score may be improved, then we would be happy to address these as well.\"}",
"{\"title\": \"Simplified paper and clarification of our contribution\", \"comment\": \"Thank you for the review.\\n\\n__Difficult read__ \\nWe simplified the paper a lot. Everything is now described with simple words and there are no more mathematical technicalities in the main paper. The rigorous problem statement, the mathematical description and all the theoretical guarantees are moved into the appendix. To clarify even more the paper, we describe the previous methods and we explain how we built our model in the main paper as you suggested. \\n\\nWe believe that it is important to have a solid and rigorous mathematical explanation of those recent techniques and we think that this is a significant contribution for the machine learning community. However, we completely agree with you that it can be explained in a better and simpler way, such that a bigger audience can understand and take advantage of our contribution. This is what we tried to do and we hope that you will appreciate the current version. Thank you for taking the time to go through it. We also changed the formatting of the references as you suggested.\\n\\n__On choice of venue__ \\nWe have submitted to ICLR because our paper is about improving an already-existing machine learning technique. \\n\\n__Novelty__ \\nOur model is different from the previous work. In the ODE-RNN and in the GRU-ODE-Bayes, it is a recurrent neural network, where between two observations, the hidden state is modeled by a neural ODE. In our model, we don\\u2019t use any recurrent neural network. This makes the model much easier and faster to train. We instead have a neural ODE that takes three more parameters (the last observation, the current time and the duration between the current time and the last observation). Moreover, we want to emphasize that we provided a novel training framework. For the first time, we provided a mathematical formulation, a rigorously defined problem statement and based on the new objective function, the theoretical guarantees that our algorithm works, which would not have been possible with a different training framework. In the GRU-ODE-Bayes paper, the authors do not give any theoretical guarantees for the model, only an empirical study. In contrast, our method is proven to converge to the optimal solution.\\n\\n__Summary__ \\nWe hope we have addressed every concern that the reviewer has raised. We would be very happy to have further discussion if there are any other obstacles to raising the review score.\"}",
"{\"title\": \"Not very novel method, difficult read.\", \"review\": \"##########################\\nThe paper proposes an algorithm and an analysis of its convergence.\\nThe algorithm propose to learn a model of temporal data y_1 , ... , y_T given input x_1, ..., x_T\\nThe observations are assumed to arise from a deterministic latent h \\ngoverned by a piecewise continuous ode (in between consecutive times t_i, t_i+1)\\nwith additional deterministic jumps at transitions.\\n\\nIn the ODE-RNN paper, the latent h can be expressed as a single ODE for the whole time horizon (rewriting the\\njump with a skip transition).\\n\\nThis paper appears to me as taking this expression and choosing a particular bounded form for the\\ndynamics, jump and readout functions\\nA statistical asymptotic analysis of the convergence of the algorithms, for random times and inputs is given. \\n\\n##########################\", \"methodology\": \"I find the paper quite difficult to read, I blame both its structure and my lack of ease with the mathematics used here.\\nHowever from what I have understood of the algorithm proposed, I find the methodological contribution very limited.\", \"clarity\": \"I come from the machine learning community and read with no difficulty \\npapers cited in the related work section.\\nIn comparison, I find this paper extremely difficult to read and parse despite containing the same kind of information.\\n\\nAsymptotic analysis.\\nI leave to other reviewers the evaluation of the convergence analysis.\\nMy evaluation being partial, my confidence rating is set accordingly.\\n\\n* For a machine learning paper presenting in the end a 3 line simple algorithm, the paper contains\\na lot of superfluous mathematical notation that crowds the paper and make the reading very tedious.\\nMany of the papers cited Brouwer 2019, Rubanova 2019, Li 2020, offer a much smoother read in that respect.\\nAs is, this paper feels better suited to a more specialist statistics venue.\\n\\n* For example, many elements are introduced in the main text and are not really necessary to understand what the paper does\\nThe detailed section on random inputs is used only in a theorem coming later, why have it in the main text in so much details.\\nOn the other end, a description of the method this paper builds on is left into appendices.\\n\\n#########################\", \"additional_comments\": [\"the formatting of the references is very inconsistent, please update\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Review of neural jump ODEs\", \"review\": \"Summary: This paper introduces Neural Jump Ordinary Differential Equations as a method for learning models of continuous-time stochastic processes sampled at random time epochs. Specifically, the paper studies the problem of estimating the marginal conditional expectation (i.e., the L2 optimal approximation conditional on the available information) by estimating an auxiliary stochastic differential equation, parameterized by neural networks, that approximates the conditional expectation of the process of interest at each point in time. The neural networks are trained by using a \\u201crandomized\\u201d mean squared-loss objective. The main theoretical results in the paper include asymptotic consistency of the optimal objective value in the limit of a large neural network, as well as consistency of a Monte Carlo sample average estimator of the value. The paper also establishes the L2 convergence of the estimated auxiliary solution to the marginal conditional expectation.\\n\\nThe technical details in the paper are mostly sound, and I believe it should be of interest to a wide community. The question of estimating stochastic models sampled at regular or irregular intervals is of broad utility. There are some technical issues however, but these can be resolved I believe. In particular, in the conclusions of Theorem 4.1 and 4.2, it seems as though the authors claim almost sure convergence, unless I am misunderstanding their statement. What the authors establish is convergence in L2, but why does is imply almost sure convergence? Wouldn\\u2019t one require uniform integrability to conclude more? Furthermore, this is not a process level convergence result, and therefore I do not believe that they can conclude (as in Remark G.3) that the limit holds almost surely. (Also, the authors seem to suggest tin Remark G.2 that they\\u2019re not establishing L2 convergence, but this could be a problem with the writing). \\n\\nComing to the writing, I note that I did find the paper somewhat sloppy in its use of terminology and notation. For instance on p.1 the authors state \\u201c...while stochastic processes are continuous in time...\\u201d This is not quite true, since one can define discrete-time stochastic processes. I also found the discussion around justifying \\u201cirregular\\u201d sampling of the stochastic process to be poorly written. In particular, it is stated that \\u201c...dividing the time-line into equally-sized intervals...is again making assumptions and information is lost...\\u201d well, any sampling will involve a loss of information, and the randomized sampling process described in this paper also involves assumptions. I don\\u2019t think this comment is appropriate. Furthermore, the authors do not make a clear case for why their irregular sampling procedure is appropriate. I\\u2019m quite certain that the sampling process introduces bias into the estimation; for instance, Theorem 1 of ref. [1] below provides sufficient conditions under which an \\u201cirregularly\\u201d sampled estimator of a functional of an SDE is unbiased. The authors must do a better job of justifying their method. 
I would also urge them to add an example of a randomized sampling process; for instance, a Poisson process sampler would satisfy their definition, in which case the sampling time epochs form an ordered statistic.\\n\\nComing to the development of the stochastic model, it is unclear to me as to why all of the random \\u201cobjects\\u201d cannot be defined on the same sample space. Essentially, couldn\\u2019t one view the sampling process as a point process on the same sample space supporting the SDE? \\n\\nNext, in Prop. 2.1, the authors state that the optimal adapted process approximating process is \\\\hat{X}_t \\u2014 but \\\\hat{X_t} is only defined pointwise (i.e., at each time \\u2018t\\u2019) and it is not defined as a stochastic process. Indeed, for that the authors must describe the finite dimensional distributions for all finite sets of time epochs to define the stochastic process. I believe it is inappropriate to call this a stochastic process. This doesn\\u2019t affect the main results, since the authors only establish convergence in an L2 sense, where the full distribution is not necessary.\", \"some_further_minor_comments\": \"1. Change the term \\u201cobservation dates\\u201d to \\u201cobservation epochs\\u201d.\\n2. Change \\u201camount of observations\\u201d to \\u201cnumber of observations\\u201d (or samples).\\n3. On P.3 in the definition of \\\\lambda_k, the set \\\\mathcal{B}([0,T]) is undefined.\\n4. The notation defining the function \\\\tilde{\\\\mu} is very confusing, please change.\\n5. P.4 \\u201c...since the variation of u...\\u201d should be \\u201c...since the total variation of u...\\u201d\\n6. What do you mean by \\u201cergodic\\u201d approximation of the objective? Isn\\u2019t it simply a sample average approximation? Which ergodic theorem is playing a role here?\\n7. I would also urge you clearly define what you mean by \\\\mathbb{L}-convergence, for completion. \\n\\n[1] Unbiased Estimation with Square Root Convergence for SDE Models, Chang-Han Rhee and Peter W. Glynn.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Contribution over existing work is unclear, experimental validation is minimal.\", \"review\": \"The authors propose a method for learning the conditional expectation of stochastic process in an online fashion. The paper bears a considerable theoretical treatment, derived from the stochastic filtering literature, which is present both in the main body of the paper and the appendix. Besides the model, the paper also aims to provide a theoretical justification of the convergence of their method.\\n\\nI find the contribution of the paper somewhat obscure, its aims to be incremental with respect to the previous literature, and the experimental validation heavily unconvincing. I support my recommendation through the following points: \\n\\n- Following the well known (by now) neural ODE and neural jump SDE, the contribution of the paper seems minor. The authors state that they focus on giving theoretical guarantees, however, these are specific and loosely validated experimentally\\n- There is a fair amount of space dedicate to the theoretical presentation of the background, I agree with the importance of theory, but I failed to see how that theory supports the claims of the paper. \\n-the experiments are limited: only 3 synthetic examples and only one real-world one.\\n-the authors states that their method focuses on approximating (directly) the conditional expectation, this seems to be a different with the previous literature. However, if that's the case, the authors should consider more benchmarks such as linear filters (adapted to non-uniformly-spaced data), Gaussian processes, or general time series models. \\n\\nThis paper does have a contribution. My recommendation is that the authors show it in a clearer (to the point) manner with an improved experimental validation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #1\", \"review\": \"The submission studies a simplified model of ODE-RNN and GRU-ODE-Bayes theoretically, proves convergence results, and presents experimental results in companion with the theoretical results.\\n\\n\\nThe paper does a good job in defining concepts precisely. Though, this has come at a cost of highly complex notation, which may hinder the average researcher in the ML+diffeq community, who may not have a strong background in probability theory, to understand the paper. I would therefore recommend the authors to simplify the notation by deferring the precise mathematical definition of concepts such as information sigma algebras to the appendix. \\n\\n\\nThe section (sec. 2.4) on optimal approximation of a stochastic process in the main text is somewhat vague. Optimality certainly depends on the cost function being considered, in which case, the appendix states that the 2-norm is used here. The particular norm being used is somewhat independent of the construction of the probability space, e.g. we could consider the same prob. space and evaluate the difference between the random variable and the fixed prediction using some other function, say the metric function induced by the L1-norm). This makes terms such as \\u201cL^2(omega X omega tilde, ...)-minimizer\\u201d somewhat confusing. Note my comment here is somewhat handwavy about the precise technicalities, but it should convey the relevant idea. \\n\\n\\nMy main concern regarding the paper is about novelty. It seems that the model considered in section 3 falls broadly in line with ODE-RNN and GRU-ODE-Bayes. On the other hand, the experiments section also doesn\\u2019t compare against latent ODE, which is a strong but relevant baseline. \\n\\n\\nThe section (sec. 4) on theoretical convergence results mostly assume that the ERM can already be found. This rather strong assumption therefore leaves the theorems in that section not unexpected, and at the same time, less relevant for practitioners. It is also unclear whether convergence rates can be derived. \\n\\nThe paper does a decent job in clarifying its relationship with prior work.\", \"post_rebuttal\": [\"I thank the authors for improving the presentation of the paper and including additional experiments comparing to latent ODE.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
0O_cQfw6uEh | Gradient Origin Networks | [
"Sam Bond-Taylor",
"Chris G. Willcocks"
] | This paper proposes a new type of generative model that is able to quickly learn a latent representation without an encoder. This is achieved using empirical Bayes to calculate the expectation of the posterior, which is implemented by initialising a latent vector with zeros, then using the gradient of the log-likelihood of the data with respect to this zero vector as new latent points. The approach has similar characteristics to autoencoders, but with a simpler architecture, and is demonstrated in a variational autoencoder equivalent that permits sampling. This also allows implicit representation networks to learn a space of implicit functions without requiring a hypernetwork, retaining their representation advantages across datasets. The experiments show that the proposed method converges faster, with significantly lower reconstruction error than autoencoders, while requiring half the parameters. | [
"Deep Learning",
"Generative Models",
"Implicit Representation"
] | Accept (Poster) | https://openreview.net/pdf?id=0O_cQfw6uEh | https://openreview.net/forum?id=0O_cQfw6uEh | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"afDk9eNH48N",
"WZyMW4mzuWT",
"XQ-f34Kl7e1",
"yuNYcU5JPB_",
"L6b8kpQDDES",
"c2Tf26KFK_W",
"yiPHZgE5QR",
"cRZ97ucqYTO",
"CDj6ZkV5Ft",
"ibJJkF7_G5v",
"O9dl7y_u12v",
"80wLrgIYXMT",
"wcc22YBuryA",
"UEmjMeEUIE",
"g-QgCiC9YK9",
"OTL9r-5XmOs",
"EYWJekQyYEm",
"WVQgsB-Mi7q"
],
"note_type": [
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1616103948768,
1615893844785,
1615863577071,
1615836834581,
1615819540594,
1615804349233,
1615702519975,
1610040406793,
1606247321833,
1606223363239,
1606149850866,
1605897990807,
1605897903106,
1605897837984,
1605897657546,
1604019307433,
1603926613082,
1603200980711
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"~Soochan_Lee1"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"~Soochan_Lee1"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"~Soochan_Lee1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3694/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3694/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3694/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Multi-step GONs are non-linear\", \"comment\": \"We agree that single-step GONs for mean-field implementations are linear as you've helpfully shown clearly above. However, multiple-step GONs (see Figure 2b and Table 1) for this are able to give non-linear encodings (because the second step uses a $z$ that is a linear function of $x$ so now $F(z)$ is dependent on $x$). In our experiments we found that the non-linearity induced by multiple-steps didn\\u2019t provide quantitative improvement when dealing with high-dimensional datasets while it significantly increased run-time (see Figure 2b and Table 1).\\n\\nHere is a [Colab notebook](https://colab.research.google.com/gist/cwkx/1f3db3c088334fdccb24822ee280bb2a/non-linear-2-step-gon-encodings-example.ipynb) demonstrating a 2-step GON giving clear non-linear encodings on your sphere test example.\"}",
"{\"title\": \"We are not obscuring the point\", \"comment\": \"1. Using a linear encoding like this is clearly not scalable. To show our point, for our bedrooms experiment (128x128x3 compressed to 2048), your above notebook example with a linear encoder would use **100,663,296 parameters** whereas **GONs use 0**. And the qualitative results are impressive in both cases.\\n\\t- GONs are presented in a general form; we don't dispute that mean-field GONs encode linearly. Our high dimensional test cases show they outperform their non-linear AE equivalents (which are standard convolutional AE architectures).\\n\\t- As stated throughout our abstract and paper, one of the main use cases of GONs is their applications in implicit networks.\\n2. As discussed in our previous comment, MADE is an autoregressive model.\\n3. We just used MADE as a simple example to demonstrate GONs capability for your toy benchmark. Other likelihood models such as PixelVAE or MAE can be used for higher dimensional data. \\n4. In the method section of our paper we discuss modelling in broad terms of $p(x|z)$ and do not state that this has to be a mean-field model. Our experiments demonstrate the efficacy of GONs with simple independent output distributions outperforming autoencoders while using significantly fewer parameters, but GONs can also be applied to more complex decoders.\\n5. Consider an autoregressive model $p(x_1|z)p(x_2|x_1,z)\\\\cdots p(x_n|x_{<n},z)$. The gradient of $p(x_2|x_1,z)$ with respect to $z$ is dependent on $x_1$ and the gradient of $p(x_n|x_{<n},z)$ with respect to $z$ is dependent on $x_{<n}$. This is what the code we shared in our initial answer does.\", \"to_summarise_our_position\": [\"GONs are presented in a general form that supports both linear and non-linear encodings dependent on how $p(x|z)$ is modelled.\", \"Linearly encoding GONs significantly outperform autoencoders on high dimensional data with substantially fewer parameters, while converging faster and generalising better.\", \"In light of your strong views on this, we are more than happy to add a statement to our paper that the mean-field implementation of GONs equates to a linear encoding and recheck, to make sure we are consistent with such claims - this does not otherwise affect the narrative, contributions, or any part of this paper: abstract, introduction, method, results, discussion, and conclusions.\"]}",
"{\"title\": \"The authors are obscuring the point\", \"comment\": \"Unfortunately, the authors continue to make inaccurate and irrelevant claims.\", \"here_i_summarize_several_key_facts\": \"1. [This notebook](https://colab.research.google.com/drive/1EhBdvsuNHRhAtOidYu59JTyzZXfrUpxN?usp=sharing) shows that a linear encoding achieves the same reconstruction loss as GON encoding in MNIST and Fashion MNIST. It is unarguably clear, in both theoretical and empirical aspects, that GON encodes linearly.\\n\\n1. MADE has an encoder part. To be specific, the first component in their code (`MaskedLinear(nin, nhidden, num_cond_inputs, masks[0])`) is the encoder. Meanwhile, the first sentence in the abstract of this paper is, \\\"This paper proposes a new type of generative model that is able to quickly learn a latent representation **without an encoder**.\\\"\\n\\n1. The authors did not even cite MADE in their paper.\\n\\n1. Using an autoregressive decoder is out of the scope of this paper. The authors did not mention nor experiment with the idea.\\n\\n1. The authors did not show how to combine GON with an autoregressive model without an encoder. I doubt that using a series of linear models to autoregressively decode would add any nonlinearity.\\n\\n\\nI feel very sorry for the authors, but I have to suggest withdrawing this paper.\\nIt would cause a lot of confusion and waste many valuable hours of other researchers.\\nI hope the authors make a wise decision for the community.\"}",
"{\"title\": \"GONs are not mean-field models and this is not a main limitation\", \"comment\": \"When the generative function models each data component independently, the encoding function is linear, however, this is not a fundamental property of GONs since conditional dependencies can be modelled effectively. We'll make sure we make no such claims in the final camera ready version. Besides, GONs seem to excel in high dimensional data spaces as demonstrated by our strong empirical results, significantly outperforming autoencoders in real world cases, fitting better while also generalising better, even with small latent vectors despite making this simple independence assumption. This allows applications such as modelling $p(x|z)$ using an implicit network with advantages such as superesolution.\\n\\nAdditionally, MADE is not a standard autoencoder, it is an autoregressive model obtained by masking an autoencoder\\u2019s weights such that each output neuron is conditioned only on the previous values. This is different to a standard autoencoder which, unlike MADE, models the components of x independently but requires a bottleneck to prevent the identity function from being learned. Our example demonstrates this in a latent conditional model, comparable to a PixelVAE [1] or MAE [2], both of which require an encoder to compress data to latent vectors. On the other hand, our example demonstrates that GONs can achieve this without an encoder.\\n\\n[1] Gulrajani, I., Kumar, K., Ahmed, F., Taiga, A. A., Visin, F., Vazquez, D., & Courville, A. (2016). Pixelvae: A latent variable model for natural images. ICLR.\\n\\n[2] Ma, X., Zhou, C., & Hovy, E. (2019). MAE: Mutual posterior-divergence regularization for variational autoencoders. ICLR.\"}",
"{\"title\": \"There is no error in my reasoning\", \"comment\": \"Yes, GONs compute gradient w.r.t. $\\\\log p(x|z)$.\\nThis is equivalent to computing gradient w.r.t. MSE loss if $\\\\log p(x|z)$ is assumed to be a Gaussian with unit covariance.\\nThis is also explicitly expressed in Eq. (8).\\n\\nMy point is that the GON's encoding capability is severely limited (i.e., linear encoding when used with the MSE loss).\\nI clearly demonstrated this fact with the notebook that I shared earlier.\\nIf I am wrong, please point out which part of my code is incorrect.\\nAccording to my understanding, every experiment in the paper shares this limitation.\\n\\nMy code shows that GONs cannot model a simple 2D manifold with 2D latent variables, while autoencoders can.\\nThe authors' response is completely irrelevant to this issue.\\nThey argue that MADE, an acronym for Masked ***Autoencoder*** for Distribution Estimation, can be used to model the 2D manifold.\\nBut this is utterly obvious because MADE has a nonlinear encoder!\\nThe main novelty claimed by this paper is that GONs can infer latent variables without an encoder.\\nI cannot understand why the authors brought this up as a counterexample.\"}",
"{\"title\": \"GONs are not linear models\", \"comment\": \"There's an error in your reasoning, we define the latent variable computation in terms of $p(x|z)$ (Equation 7) which can be modelled such that the components of $x$ have conditional dependencies, causing $\\\\frac{\\\\partial F}{\\\\partial z}$ to be dependent on $x$. A simple counter example to your unit sphere example is to model $p(x|z)$ using a conditional MADE [1], which can easily reconstruct the data distribution with 2D latent variables, without an encoder. Here is a [Colab notebook](https://colab.research.google.com/gist/samb-t/d3a4d7d7204bda3a0c81995e654d00d4/made-gon-sphere.ipynb) demonstrating this.\\n\\nAs for the high-quality samples being just due to a large $z$ dimension, please see Figure 10c where we demonstrate GONs reconstruction ability for a variety of latent sizes. Indeed, GONs outperform autoencoders even with small latent spaces.\\n\\n[1] Germain, M., Gregor, K., Murray, I., & Larochelle, H. (2015). MADE: Masked Autoencoder for Distribution Estimation. In International Conference on Machine Learning (pp. 881-889). PMLR.\"}",
"{\"title\": \"GON is a linear model\", \"comment\": \"I think I found a critical problem with GON.\\n\\nThe latent variable $z$ is computed as follows:\\n$$\\n\\\\begin{align}\\nz\\n&= -\\\\frac{\\\\partial \\\\mathcal L^{MSE}}{\\\\partial z} \\\\bigg\\\\vert_{z=0} \\\\\\\\\\\\\\\\\\n&= - \\\\frac{\\\\partial \\\\mathcal L^{MSE}}{\\\\partial F}\\\\bigg\\\\vert_{z=0} \\\\frac{\\\\partial F}{\\\\partial z}\\\\bigg\\\\vert_{z=0} \\\\\\\\\\\\\\\\\\n&= - (x - F(\\\\mathbf 0)) \\\\underbrace{\\\\frac{\\\\partial F}{\\\\partial z} \\\\bigg\\\\vert_{z=0}}_{\\\\text{const. w.r.t. } x}\\n\\\\end{align}\\n$$\\nHere we can see that $z$ is just an affine transformation of $x$.\\nThere is no nonlinearity.\\nTherefore, we can conclude that GON is practically equivalent to a linear model.\\n\\nHere is [a Colab notebook](https://colab.research.google.com/drive/1xIY5CEUFilASnuonWChTiABAouRu5Bup?usp=sharing) that shows GON's limitation with a simple toy example.\\nNote that I extended the Colab notebook provided by the authors.\\nI created a set of random points on the surface of the 3D unit sphere, which is a 2D manifold.\\nAn MLP autoencoder can successfully reconstruct this data distribution with 2D latent variables, while GON cannot.\\n\\nThe same is true for variational GON. Adding one more linear layer to extract variational parameters does not add any nonlinearity.\\n\\nI was initially very excited by the authors' claim that we can learn a deep generative model without an encoder, but I am afraid this is not true.\\nThe high-quality examples in the experiment section would be due to a large $z$ dimension.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper presents a new inference mechanism for latent variable models, by taking the derivative of log-likelihood with respect to a zero-valued vector. Initially, the reviewers raised concerns mostly regarding the limited experimentation and missing baselines. However, in the revised version, the authors addressed most of these concerns.\\n\\nGiven that most reviewers are positive after the revision and since the proposed method is simple and interesting, I recommend accepting this paper.\"}",
"{\"title\": \"Summary of revision changes\", \"comment\": \"We thank the reviewers for their valuable comments. During this rebuttal period we have made a very significant revision to the original paper, including the addition of Section 3.1, a new derivation from empirical Bayes (proof of conditional empirical bayes in Appendix A), added Table 1, showing quantitatively that GONs significantly outperform competing single-step methods, Table 2, demonstrating that variational GONs achieve substantially lower ELBO than VAEs (on 5/6 datasets) including large complex datasets (CelebA), Figure 9, showing qualitatively that GONs can well represent large complex datasets (added higher resolution LSUN Bedrooms and CelebA), Figure 2b, confirming that a single gradient step is sufficient, Figures 2c and 16, demonstrating GON generalisation, Figure 17, showing that GONs allow superresolution on test data, Table 1 and Figures 2a, 3, and 10 have been updated to include means/standard deviations (error envelopes) from multiple runs, alongside various new discussions.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your kind words and for updating the score. We have implemented these suggested changes; error bars (as envelopes) representing standard deviation have been added to Table 1 and Figures 2a, 3, and 10 however we deemed it inappropriate for the other figures due to significant line overlapping impacting the presentation of these results.\"}",
"{\"title\": \"Thanks for the edits\", \"comment\": [\"Thanks for addressing the majority of my comments, this is a much stronger paper now and I'm happy to increase my score. Some further suggestions:\", \"Table 1 should have the best results in boldface.\", \"It'd be great to have errors bars in Table 1 and 2 as well as most figures.\", \"In sec 3.6 \\\"non-linearity function\\\" should be \\\"non-linear function\\\".\", \"I like the newly-introduced argument (connections to MoE) of why GONs work well, though I must say that I am still baffled by how good it is.\"]}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for your constructive feedback and excellent suggestions. We will address your comments point-by-point:\\n\\n> \\u201cThe results are only on small scale and toyish datasets\\u201d\", \"in_our_latest_update_we_have_added_experiments_on_larger_scale_more_complex_datasets\": \"128x128 LSUN Bedrooms for the GON, and CelebA 64x64 for the variational GON.\\n\\n> \\u201cBaselines: To determine the efficacy of this method, the authors would have to compare against some similar methods including...\\u201d\\n\\nFor our non-variational approach, we have added quantitative comparisons with autoencoders+tied weights, our approach with detached gradients, our approach with multiple gradient descent steps, both with and without detached gradients, as suggested, as well as with a GLO (Bojanowski 2018) which assigns a latent vector to each data point and jointly optimises these with the network parameters (Table 1). We find that GONs achieve the lowest validation losses on 3 of the 5 datasets tested. Notably, all other single step approaches result in significantly high reconstruction loss.\\n\\nWe have also added quantitative comparisons with vanilla VAEs in Table 2, finding GONs to achieve lower ELBO on 5 of the 6 datasets. We aim to add comparisons with other variational approaches as suggested, in another update.\\n\\n> \\u201cb) the transposed-convolution used in such a setup corresponds almost exactly to the gradient of the encoder, which is an idea very similar to GONs.\\u201d\\n\\nWhile the gradient through a single convolution layer is indeed related to transposed convolutions, when convolutions are composed and/or interleaved with other functions, the gradient becomes much more complex. Indeed, the gradient of deep MLPs corresponds to a product of networks. Additionally, restricting the architecture to using tied-weights is not necessarily applicable to more complex architectures whereas using the gradient can be applied to any function.\\n\\n> \\u201cMissing links to the literature\\u2026\\u201d\\n\\nWe have now integrated discussion of autoencoders with tied weights and the connections with model-agnostic meta-learning into the Discussion section.\\n\\n> \\u201cMissing experiments: We would need more evidence to determine if such a simple method is useful. A good experiment would be e.g. on imagenet.\\u201d\\n\\nWe hope that the aforementioned additional quantitative experiments (Tables 1 and 2) including experiments on CelebA, and qualitative examples on LSUN Bedrooms assuage your concerns. Experiments have also been added to evaluate our claim that a single gradient step is sufficient and that GONs do not memorise datasets in Figures 2b and c respectively.\\n\\n> \\u201cFurther suggestions: Subfigures in fig2 and 3 (and most of figs in the appendix) use different scales on the Y axis. It would be easier to read the figures if the scaled were normalized within a single figure.\\u201d\\n\\nAll figures which we deemed legible when normalised with a single figure (previously Figures 2, 9, 10, and 11) have been changed as per your suggestion (now Figures 2 and 10).\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"> \\u201cIs Figure 7 from an explicit GON or an implicit GON? If its explicit, how are the number of parameters comparable to an implicitGON? Clearly an explicit model will have a lot more number of parameters. esp as the size of the images increase?\\u201d\\n\\nFigure 7 is from an explicit GON, the caption has been updated to clarify this. The number of parameters used for implicit GONs and explicit GONs are shown in the captions of Figure 4 and 7 respectively. While not a direct comparison since Figure 7 is a variational GON and the numbers of parameters are chosen to be approximately comparable, this does provide some insight. We find that implicit models are able to better represent data when fewer parameters are available or when the image size is larger.\\n\\n> \\u201cI really like and appreciate the variationalGON experiments. How do they compare with standard VAEs? Can they recover CelebA 64x64 images? How would they compare on quantitative metrics like FID etc.?\\u201d\\n\\nIn the latest update we have included quantitative comparisons between variational GONs and VAEs in terms of ELBO on the test set in Table 2. This includes evaluation on the CelebA 64x64 dataset. In 5 of the 6 datasets tested on, the GON approach achieves lower ELBO than the VAE. Samples from the variational GON trained on CelebA can also now be found in Figure 9b in order to assess this qualitatively.\\n\\n> \\u201cIn the super resolution experiment, can it super resolve any image from the distribution it was trained on? For e.g. in figure 5. is it just a matter of resampling the grid to 256x256 and running them through the pre-trained model for any sample from p(x)?\\u201d\\n\\nYes, it can super resolve any image thanks to the generalisation ability of GONs, and it is as simple as you state; Figure 17 has been updated to contain super-samples of images in the MNIST test set.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your excellent comprehensive feedback. We have incorporated many of your suggestions in the most recent update. We will address your comments point-by-point:\\n\\n> \\u201cThe paper is very dense in terms of ideas, and as such falls short in thoroughly evaluating all of them. For example, the paper contributes several ideas like GONs, implicit GONs, variational GONs, which is great but it would help if each one of those pieces were studied in some more detail so they can be compared and contextualized better with existing approaches. For example, in the formulation itself the GON loss is presented \\u201cas is\\u201d, but I think it warrants some more study. \\u201d\\n\\nWe have greatly expanded the method section, deriving our approach from empirical Bayes. Specifically, there is now a preliminaries section which introduces the concept of empirical Bayes and variational autoencoders in detail; additionally, the method section is now divided into sections, covering our contributions (GON, variational GON, implicit GON, and generalisations) in more detail as well as more thoroughly introducing the surrounding concepts.\\n\\n> \\u201cFor example, why is just a single step \\u201csufficient\\u201d to estimate \\u201cz\\u201d? Does the quality of \\u201cz\\u201d improve if you take multiple smaller steps? How stable is this for different datasets? The empirical studies show promise, that indeed this can work reasonably well in reconstructing different datasets, but it would greatly help to justify some of these choices further. \\u201d\\n\\nOur new derivation from empirical Bayes shows that if we consider z_0 a noisy approximation of z, then we can use a single gradient step to calculate the expected value of p(z|x). As mentioned in our response to Reviewer 3, we also provide an explanation from a function approximating perspective, namely, the derivative of a deep MLP corresponds to a product of networks, allowing efficient modelling of high dimensional data. \\n\\nWe have added experiments to evaluate the claim that a single step is sufficient in Figure 2b, finding that when jointly optimised, multiple steps offer no notable improvement over a single step, and training with gradients of z detached results in significantly worse performance. This can also be seen over a broad range of datasets, trained over long periods, in Table 1.\\n\\nIn terms of stability, we have observed no issues when training these models. Standard architectures are used and the Adam optimiser with default values. This is the case even on high resolutions data (up to 128x128 images tested). Additionally, we find them to be extremely consistent over multiple runs.\\n\\n> \\u201cIn the explicit case, how important is the choice of \\u201cF\\u201d ? The choice of activation function is explored but what about the architecture/ number of parameters for a given dataset?\\u201d\\n\\nIn Figure 10b the effect of number of parameters is explored. We find that GONs outperform their equivalent autoencoder in all cases. As the number of parameters is increased, this lead lessens due to diminishing returns. While we only use simple architectures, we have found all variations (e.g. upsampling, transposed convolutions, instance normalisation) to be effective.\\n\\n> \\u201cIn all the experiments, the reconstruction losses are shown are for the training set, how do the validation set samples get reconstructed? 
It\\u2019s not clear if GONs are so effective in reconstructing because they are memorizing the data?\\u201d\\n\\nWe have added extra experiments to assess this. In Figure 2c, Table 1, and Figure 16 training and validation losses are plotted for both GONs and their equivalent autoencoder; this demonstrates that GONs not only do not memorise the data, but appear to generalise better than autoencoders. \\n\\n> \\u201cHow does the performance of GONs change as the size of the output space grows larger? For e.g. 128x128 or 256x256?\\u201d\\n\\nWe find the performance to be on par with smaller sized outputs. This is evaluated qualitatively by reconstructing 128x128 LSUN Bedrooms data with a convolutional GON in Figure 9a where a substantial amount of detail is modelled.\\n\\n> \\u201cSome of the terminology is also confusing. What does it mean when you \\u201coverfit\\u201d to an entire distribution? I understand its usage for a single image, but it's not clear what this means for an entire dataset. Are the samples from Figure 4 all from the same trained GON?\\u201d\\n\\nThank you for pointing out this misnomer, the caption has been adjusted accordingly. This experiment is meant to compare with implicit representation networks which are trained on a single image. We show that GONs can represent whole datasets to a high degree of fidelity. To answer your question explicitly, the images in Figure 4 are indeed all from the same trained GON.\\n\\n(Continued Below)\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your very helpful feedback, we will try to address your comments point-by-point:\\n\\n> \\u201cThe model assumption that the one step gradient from zero vector equals to latent vector is quite limited and greatly constrains the model expressiveness. A justification that such assumption is reasonable is badly needed.\\u201d\\n\\nWe have updated the paper, deriving our approach from empirical Bayes. In summary, for a latent variable z and data point x, if we have a noisy observation of z, i.e. z_0=z+N(0,I), then empirical Bayes\\u2019 allows us to obtain the expected value of p(z|x) using a single gradient step. When a normally distributed prior is assumed over z, then we can choose z_0 as the origin since it is the mean of p(z_0).\\n\\nWe also provide an explanation, from a function approximating perspective:\\n1. The gradient itself is a non-linear function that can approximate functions: the derivative of a deep MLP corresponds to a product of networks.\\n2. Using the gradient as an encoder offers good initialisation since it inherently provides an improved estimate of the latent vector.\\n3. Good latent spaces should satisfy local consistency (points close in latent space should be close in output space). Similar data points have similar gradients so this is satisfied. The exact gradient is thus relatively unimportant; the network\\u2019s prior must be the gradient operation but since the gradient is relatively unimportant, this does not severely restrict the network\\u2019s expressiveness.\\n\\n> \\u201cFormulation needs to be carefully checked. For example, Eqn 2 is not entirely correct to me. The second term should not be binary cross entropy as there is no categorical variable involved. Also, please avoid using abbreviations (L^BCE, L^CCE) at the first time to introduce them, which are confusing.\\u201d\\n\\nThank you for pointing this out. The variational GON formulation has been altered to be more general.\\n\\n> \\u201cExperimental results are not sufficient to demonstrate the efficacy. Need more quantitative analysis and experiments on more challenging datasets.\\u201d\\n\\nWe have added a number of additional experiments analysing the GON formulation including the effect of multiple gradient descent steps, confirming our hypothesis that a single step is sufficient (Figure 2b) and the ability for GONs to generalise (Figure 2c); qualitative experiments on more challenging datasets: reconstructions on LSUN Bedrooms (Figure 9a) and samples from a variational GON trained on CelebA (Figure 9b); and quantitative analysis comparing GONs with other approaches as suggested by Reviewer 1 where we find GONs to be competitive with multi-step approaches (Table 1) as well as a comparison between variational GONs and VAEs in terms of validation ELBO (Table 2). In a future update, more experiments will be added.\\n\\n> \\u201cThe claim that it saves parameters compared to VAE is confusing. In the variational version, parametrizations of mu(x) and sigma(x) are also required. A principled way to very this claim is to show that with the variational version, the method could use much less parameters compared VAE while has the better synthesis quality.\\u201d\\n\\nTo implement a variational GON we integrate the reparameterization trick into the decoder network. 
Specifically, the forward pass takes input z, is mapped by two linear layers to mu(z) and sigma(z), the reparameterization trick is applied, then the rest of the function is performed to obtain p(x|z). This allows us to use the GON update step to obtain z from the original z_0, while still parameterising mu and sigma. The parameters are thus reused in the derivative as the encoder so that there are just under half as many. We have attempted to clarify this in the variational GON section of the method. As suggested, we quantitatively verify this in Table 2 and find that the variational GON achieves lower validation ELBO than an equivalent VAE with almost twice as many parameters on 5 of the 6 datasets.\"}",
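The forward pass just described can be sketched as follows (assuming PyTorch; layer sizes are illustrative, and the binary cross-entropy reconstruction term is just one concrete choice, since the revised variational GON formulation is more general):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalGONDecoder(nn.Module):
    def __init__(self, z_dim=32, x_dim=784):
        super().__init__()
        self.to_mu = nn.Linear(z_dim, z_dim)      # mu(z)
        self.to_logvar = nn.Linear(z_dim, z_dim)  # log sigma^2(z)
        self.body = nn.Sequential(nn.Linear(z_dim, 256), nn.ELU(), nn.Linear(256, x_dim))

    def forward(self, z):
        mu, logvar = self.to_mu(z), self.to_logvar(z)
        z_s = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return torch.sigmoid(self.body(z_s)), mu, logvar

def gon_infer(model, x, z_dim=32):
    # Single gradient step from the origin yields the latent for x (x assumed
    # in [0, 1]); create_graph=True lets training backpropagate through this
    # inference step.
    z0 = torch.zeros(x.shape[0], z_dim, requires_grad=True)
    recon, mu, logvar = model(z0)
    loss = F.binary_cross_entropy(recon, x, reduction="sum") \
        - 0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum()
    (grad,) = torch.autograd.grad(loss, z0, create_graph=True)
    return -grad
```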
"{\"title\": \"Lack of solid formulation and strong experiments\", \"review\": \"This paper proposes a new type of generative models with a new inference method of latent variables. Specifically, the gradient of latent variables with respect to zero vector is taken as the inferred latent variables. Based on this, the authors generalize the propose model to implicit and variational versions and demonstrate the models on image datasets.\", \"pros\": \"the proposed method is easy and straightforward to implement.\", \"cons\": \"1. The model assumption that the one step gradient from zero vector equals to latent vector is quite limited and greatly constrains the model expressiveness. A justification that such assumption is reasonable is badly needed.\\n\\n2. Formulation needs to be carefully checked. For example, Eqn 2 is not entirely correct to me. The second term should not be binary cross entropy as there is no categorical variable involved. Also, please avoid using abbreviations (L^BCE, L^CCE) at the first time to introduce them, which are confusing. \\n\\n3. Experimental results are not sufficient to demonstrate the efficacy. Need more quantitative analysis and experiments on more challenging datasets. \\n\\n4. The claim that it saves parameters compared to VAE is confusing. In the variational version, parametrizations of mu(x) and sigma(x) are also required. A principled way to very this claim is to show that with the variational version, the method could use much less parameters compared VAE while has the better synthesis quality. \\n\\nOverall, the method proposed in this paper is new and promising. However, given the current unclear formulation and lack of strong experimental results, I recommend a rejection.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An interesting new perspective on generative modeling and implicit representation learning, but incomplete in its execution.\", \"review\": \"The paper proposes GONs which seek to build a generative model with an \\u201cimplicit\\u201d encoder that comes essentially for free with the use of a few re-parameterization tricks. The main idea being that existing generative models with an encoder are \\u201credundant\\u201d in that the decoder itself has the ability to compute the gradient with respect to a latent vector, z, which itself can be thought of as the \\u201cencoding\\u201d. Since the choice of what initial latent vector to choose arises here, the paper advocates for simply choosing a z_0 which is a zero vector. In addition to the \\u201cexplicit\\u201d formulation, there is also an implicit GON which is proposed that can generalize implicit generative models (like SIREN) to entire distributions as opposed to a single data point, as they are currently used.\\n\\nOverall, I think this is very interesting work but incomplete. Considering GONs are a completely new category of generative models, it would greatly help to study each piece in more detail (theoretically or empirically) to establish what makes GONs successful, different, and how this improves our understanding of implicit representations in neural networks.\", \"strengths\": [\"An interesting and novel formulation of encoding schemes from decoders that do not need any additional training or networks.\", \"The paper explores several different variants of GONs \\u2014 from a variational alternative, implicit, and a classifier. Which greatly expands its scope of application in new problems.\", \"GONs generalize implicit generative models like SIRENs to work with an entire data distribution with very few parameters, which I think is a great benefit. This also naturally allows for variational alternatives, meaning we can sample from complex high dimensional distributions using very simple networks.\", \"The implicit GON also enables finer grid sampling in the input space, enabling its use in applications like super resolution naturally \\u2014 but to any image from the training distribution.\"], \"weaknesses\": [\"The paper is very dense in terms of ideas, and as such falls short in thoroughly evaluating all of them. For example, the paper contributes several ideas like GONs, implicit GONs, variational GONs, which is great but it would help if each one of those pieces were studied in some more detail so they can be compared and contextualized better with existing approaches. For example, in the formulation itself the GON loss is presented \\u201cas is\\u201d, but I think it warrants some more study.\", \"For example, why is just a single step \\u201csufficient\\u201d to estimate \\u201cz\\u201d? Does the quality of \\u201cz\\u201d improve if you take multiple smaller steps? How stable is this for different datasets? The empirical studies show promise, that indeed this can work reasonably well in reconstructing different datasets, but it would greatly help to justify some of these choices further.\", \"In the explicit case, how important is the choice of \\u201cF\\u201d ? The choice of activation function is explored but what about the architecture/ number of parameters for a given dataset?\", \"In all the experiments, the reconstruction losses are shown are for the training set, how do the validation set samples get reconstructed? 
It\\u2019s not clear if GONs are so effective in reconstructing because they are memorizing the data?\", \"How does the performance of GONs change as the size of the output space grows larger? For e.g. 128x128 or 256x256?\", \"Some of the terminology is also confusing. What does it mean when you \\u201coverfit\\u201d to an entire distribution? I understand its usage for a single image, but it's not clear what this means for an entire dataset. Are the samples from Figure 4 all from the *same* trained GON?\", \"Is Figure 7 from an explicit GON or an implicit GON? If its explicit, how are the number of parameters comparable to an implicitGON? Clearly an explicit model will have a lot more number of parameters. esp as the size of the images increase?\", \"I really like and appreciate the variationalGON experiments. How do they compare with standard VAEs? Can they recover CelebA 64x64 images? How would they compare on quantitative metrics like FID etc.?\", \"In the super resolution experiment, can it super resolve *any* image from the distribution it was trained on? For e.g. in figure 5. is it just a matter of resampling the grid to 256x256 and running them through the pre-trained model for any sample from p(x)?\", \"---------- Update on the revised manuscript ----------\", \"I have read the new version of the paper and it reads a lot better. The new expanded methods section, and the definitions for different variations of GONs makes the paper much stronger and easier to understand. I appreciate and like the new experiments that show GONs capabilities on LSUN, comparisons with VAE on ELBO.\", \"Most of my concerns have been addressed in this version. I think this paper makes an interesting and novel contribution and I will raise my score accordingly.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Very interesting paper and findings, but seems somewhat rushed.\", \"review\": \"This paper introduces a \\\"new\\\" inference method for autoencoder-type models, where the encoder is taken as a gradient of the decoder with respect to a zero-initialized latent variable. The method is evaluated for both a deterministic autoencoder and a VAE on toy image data (cifar10 being the most complex of them) and applied to convolutional decoder and to SIREN-type implicit representation networks. This is, for all intents and purposes, a single step iterative inference setup. In its VAE variant it is extremely similar to old-school iterative inference, albeit with a single gradient step.\\n\\nThe paper is very-well written and interesting. The method seems to be getting very good results,. Still, the paper seems to be rushed. The results are only on small scale and toyish datasets, and there are very few baselines. \\n\\nIn its current state I recommend rejection due to rather limited novelty (although it's cool to see that this type of inference works for implicit scene representations) and very limited evaluation. There are also very many links to existing literature that are not properly described. Let me elaborate.\", \"baselines\": [\"To determine the efficacy of this method, the authors would have to compare against some similar methods including:\", \"old-school multi-step variational inference\", \"semi-amortized variational inference\", \"the proposed method with multiple gradient steps\", \"the proposed method with detached gradient (as in: not use 2nd order gradients)\", \"a fully-convolutional autoencoder with parameters tied between the encoder and decoder. This is for two reasons: a) this would reduce the number of parameters by half, making it more similar to GON, but also b) the transposed-convolution used in such a setup corresponds almost exactly to the gradient of the encoder, which is an idea very similar to GONs.\"], \"missing_links_to_the_literature\": [\"the above fully-conv AE setup.\", \"model-agnostic meta-learning (and related, e.g. CAVIA, LEO etc), where the \\\"latents\\\" are produced by single- or multi-step optimization.\"], \"missing_experiments\": \"We would need more evidence to determine if such a simple method is useful. A good experiment would be e.g. on imagenet.\", \"further_suggestions\": \"Subfigures in fig2 and 3 (and most of figs in the appendix) use different scales on the Y axis. It would be easier to read the figures if the scaled were normalized within a single figure.\", \"update\": \"I've updated the score given the authors' response, see my comment below.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
RGJbergVIoO | On the mapping between Hopfield networks and Restricted Boltzmann Machines | [
"Matthew Smart",
"Anton Zilman"
] | Hopfield networks (HNs) and Restricted Boltzmann Machines (RBMs) are two important models at the interface of statistical physics, machine learning, and neuroscience. Recently, there has been interest in the relationship between HNs and RBMs, due to their similarity under the statistical mechanics formalism. An exact mapping between HNs and RBMs has been previously noted for the special case of orthogonal (“uncorrelated”) encoded patterns. We present here an exact mapping in the case of correlated pattern HNs, which are more broadly applicable to existing datasets. Specifically, we show that any HN with $N$ binary variables and $p<N$ potentially correlated binary patterns can be transformed into an RBM with $N$ binary visible variables and $p$ gaussian hidden variables. We outline the conditions under which the reverse mapping exists, and conduct experiments on the MNIST dataset which suggest the mapping provides a useful initialization to the RBM weights. We discuss extensions, the potential importance of this correspondence for the training of RBMs, and for understanding the performance of feature extraction methods which utilize RBMs. | [
"Hopfield Networks",
"Restricted Boltzmann Machines",
"Statistical Physics"
] | Accept (Oral) | https://openreview.net/pdf?id=RGJbergVIoO | https://openreview.net/forum?id=RGJbergVIoO | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"7qPDF3AmMk",
"b0xM0xCALxM",
"lNDKmpzthAl",
"PRsCB4-5r62",
"qWqEjzOXhGX",
"KrLbX7asabB",
"lSYPM0mjLSJ",
"qHPewiE3o1",
"mkAx8mWK_S"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040425984,
1606169117617,
1605563709307,
1605563592823,
1605562996917,
1605562890171,
1604118994095,
1603736228630,
1603621212508
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3693/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3693/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3693/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3693/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3693/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3693/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3693/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3693/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Oral)\", \"comment\": \"Two knowledgeable reviewers were positive 7 and very positive 10 about this paper, considering it an important contribution that illuminates previously unknown aspects of two classic models, namely RBMs and Hopfield networks. They considered the work very well developed, theoretically interesting and also of potential practical relevance. A third reviewer initially expressed some reservations in regard to the inverse map from RBMs to HNs and the experiments. Following the authors' responses, which the reviewer found detailed and informative, he/she significantly raised his/her score to 7, also emphasizing that he/she hoped to see the paper accepted. With the unanimously positive feedback, I am recommending the paper to be accepted.\"}",
"{\"title\": \"Additional response to Reviewer 3\", \"comment\": \"Please note that we have uploaded an updated version of the revised manuscript.\\n\\nTo further address point (1) of Reviewer 3 (*\\\"The authors should provide experiments on the reverse mapping as suggested above.\\\"*), Appendix D.3 now contains an example of the approximate reverse mapping along with an example of the performance on an associative memory task. \\n\\nWe hope this addresses the reviewer's concerns.\"}",
"{\"title\": \"Response to Reviewer 3 (pt 1/2)\", \"comment\": \"We thank the reviewer for their careful reading and constructive feedback towards improving our manuscript. We address the reviewer\\u2019s points below (numbered in order) and have updated our manuscript accordingly:\\n \\n(1) *\\\"The authors should provide experiments on the reverse mapping as suggested above.\\\"*\\nThis is an excellent point. We agree with the reviewer that better understanding the reverse mapping is an important next step. We see as one of the important applications of the reverse mapping its potential to provide insight into what classes the RBM has \\u201clearned\\u201d after training. As our preliminary results suggest (Appendix D.2), the reverse mapping is most likely to be feasible when the RBM weights are approximately orthogonal. Thus, a prerequisite step would be to incorporate an orthogonality constraint to the weight updates during CD-k training. However, given the time constraints, we feel that the full investigation is beyond the scope and the focus of the current manuscript. We hope this addresses the reviewer\\u2019s point. \\n\\n(2) *\\\"I do not agree that figure 3 shows that RBM training \\\"simply 'fine tunes'\\\" the weights -- the difference is quite stark. How about increasing the batch size so that there is little SGD noise?\\\"*\\nWe thank the reviewer for pointing this out. We have increased the batch size to 1000. With this batch size, the weights after 50 epochs are now significantly closer to the HN initialization than with the previous lower batch size. This emphasizes the fact the HN initialization performs extremely well without any or with very little amount of training (see also responses to points (4) and (5) below). We have accordingly updated Fig. 3, Fig. 4a (to show convergence with the larger batch size), and the wording above Fig. 3.\\n\\n(3) *\\\"Figure 4a: traces are cut-off just when random initialization is catching up with HN initialization. This also applies to Figure 5.\\\"*\\nWe have extended Fig. 4a to 60 epochs to show that the random initialization converges to the same value as HN initialization. We have included in Section E.3 an extended version of Fig. 5 (including extra initial conditions, see point (5)) with training to 100 epochs to better show convergence. \\n\\n(4)*\\\"There are a few descriptions suggesting \\\"HN init. appears to train much faster than random init\\\". However, the rate of increase in of likelihood in Figure 4 is shallower for HN than for rand init. Is the advantage only at the 0'th RBM epoch?\\\"*\\nWe apologize for the confusing phrasing. By faster we mean that it is closer to its peak value after a smaller number of epochs. Indeed, HN initialization converges fastest within the 0\\u2019th epoch, and after that the rate of convergence is slower (because HN initialization has almost reached the limit). We have re-phrased the appropriate parts of the manuscript more precisely. This also emphasizes the fact that HN initialization performs very well already with very limited training (see also response to points (2) and (5)).\\n \\n(5) *\\\"The author only compared with purely random initialisation...\\\"*\\nThis is an excellent point. We had initially focused on the random initialization because it commonly used. 
To address this concern, at the reviewer's suggestion, we have performed additional experiments with two alternative initializations: (1) PCA and (2) the \\\"Hebbian\\\" Hopfield mapping developed in previous work for uncorrelated patterns (our mapping uses the \\u201cprojection\\u201d Hopfield Network, denoted \\u201cHN\\u201d below). We have updated Fig. 4, Fig. 5, and the text accordingly. \\n\\nInterestingly, in Fig. 4 although all four initializations eventually converge to the same limit, all three \\u2013 HN, PCA and \\u201cHebbian\\u201d initialization perform significantly better than the random initialization after relatively limited amount of training. Importantly, HN initialization outperforms both PCA and Hebbian initialization at early times \\u2013 emphasizing the fact that HN initialization performs well with zero CD-k training. For the classification objective (Fig. 5), the advantage of HN relative to PCA and Hebbian is more pronounced: for instance, for 100 sub-patterns the PCA becomes comparable to HN only after ~5 epochs, and further emphasizes the performance of HN with zero training. For classification, Hebbian is significantly lagging behind (projection) HN, which is now shown in the appendix Fig. E.1 (along with longer training time, see point (3)). \\n\\nThis could be of potentially applied importance for rapid training of very large datasets, but we wish to emphasize that the main theoretical point of the paper is that we provide a novel mapping between two classical models, which as an added benefit provides a reasonable and potentially useful RBM initialization. Furthermore, future work focusing on the differences in learning during the first few epochs (among the various initializations) may provide insights into what is actually being \\u201clearned\\u201d by the RBM during this time.\"}",
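For intuition, the projection-rule initialization discussed above can be written down in a few lines. The sketch below (assuming numpy) uses the decomposition $W = \Xi(\Xi^T\Xi)^{-1/2}$, which is one natural way to factor the projection-rule couplings $J = WW^T$ into an $N \times p$ RBM weight matrix; the paper's exact scaling and temperature conventions may differ.

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 100, 10
Xi = rng.choice([-1.0, 1.0], size=(N, p))   # p possibly correlated binary patterns

# Projection-rule Hopfield couplings (these reduce to the Hebbian rule
# Xi Xi^T / N when the patterns are orthogonal).
G = Xi.T @ Xi                                # p x p Gram matrix
J = Xi @ np.linalg.inv(G) @ Xi.T

# Factor J = W W^T with W = Xi (Xi^T Xi)^{-1/2}: an N x p matrix coupling
# N binary visible units to p Gaussian hidden units.
vals, vecs = np.linalg.eigh(G)
W = Xi @ (vecs @ np.diag(vals ** -0.5) @ vecs.T)

assert np.allclose(W @ W.T, J, atol=1e-10)
print(W.shape)                               # (N, p): candidate RBM initialization
```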
"{\"title\": \"Response to Reviewer 3 (pt 2/2)\", \"comment\": \"(6) *\\\"In the classification objective, if I understand correctly, the feature function is essentially quadratic in the input patterns. Should there be an ideal test error that is computed by a quadratic neural network trained with supervision by backpropagation? If the HN classifier (blue in Figure 5) is approaching this idealization, then this will strengthen the claim.\\\"*\\nThe reviewer is correct that the feature function is quadratic in the input states (each feature requires a $N \\\\times k$ matrix of weights $W_{ik}^{\\\\mu}$ ). We are not familiar with theoretical results/bounds on feedforward neural network classification performance when using analogous quadratic layers. We note that the fact that both PCA and HN for 100 sub-patterns converge to the same limit after many epochs indicates the possibility that they may be reaching such a bound, although we are not able to prove it at this point. We have updated the text accordingly.\\n\\n(7a) *\\\"The discussion on the extension to more generic, deep architectures is not well supported, and I do not see the extension to be so straightforward given the content of the current paper.\\\"*\\nWe agree that to establish the feasibility of such mappings will require future research and we have revised the text accordingly. Further, to substantiate this suggestion we have added an example to Appendix C (see Eq. (C.5)) showing one way that higher-than-2 layer networks can arise from these ideas.\\n\\n(7b) *\\\"In generation, supervised labels and clustering are used to simplify learning. Is the network able to learn just on the MNIST digits, even for the real images within a single class (e.g. \\\"7\\\")?\\\"*\\nWe are unsure if this is related to the reviewer's previous bullet point (we split them). The hopfield network, and associated mapping, relies on having access to \\\"patterns\\\". These patterns can be the centroids of labelled clusters, for example. The clustering does not have to be supervised. In that sense it can learn on just the MNIST digits from a single class, which is roughly what is done in the classification section.\\n \\n(8) *\\\"Can the authors try to characterise whether the HN initialisation is related to log-likelihood training? I wonder if there is any interesting theory; otherwise, measuring model performance by log-likelihood seems a bit arbitrary (though it makes the comparison to contrastive divergence easier).\\\"*\\nMaximizing the log-likelihood of the data is equivalent to minimizing the KL divergence between the model distribution and the data distribution (a standard generative objective). This is why we display it in Fig. 4.\\n\\nThe intuitive reason for why the HN initialization works well is because the Hopfield patterns capture the key features/prototypes from the dataset. The projection rule encodes the patterns (and nearby states) as high probability basins in the free energy landscape. Because the data itself is clustered near the patterns, these basins model the true data distribution well, which is reflected in the good generative performance without training. We hope this addresses the reviewer's question and are happy to discuss further. \\n\\n*\\\"Detailed suggestions (not to affect decision):\\\"*\\nIn addition to the points above, we have also incorporated the reviewer's detailed suggestions into the manuscript. Namely, additional references to Hinton's early work on RBMs, further detail near Eq. 
(7), and correction of the indicated typos. \\n\\nWe thank the reviewer again for their excellent suggestions. We hope that these changes address their concerns.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their detailed review of our manuscript and positive feedback. In the updated version of our manuscript, we have corrected the noted typos and adjusted the text near Eq. (3), (8), and (12). We have also clarified the comments on the limited capacity of stored sub-patterns ($10k/N < 1$), and on the qualitative similarity between Fig. 3a and Fig. 3b (we updated the figure after training with a larger batch size as suggested by another reviewer).\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their time and positive feedback. Many thanks for the strong support of our work!\"}",
"{\"title\": \"A nice theoretical exposition and result!\", \"review\": \"The paper demonstrates a mathematical equivalence between Hopfield nets and RBMs, and it shows how this connection can be leveraged for better training of RBMs.\\n\\nWhat a great paper - well written, an enlightening mathematical connection between two well-known models that to my knowledge was not previously known. Hopfield nets and RBM's have been around for decades, and I don't think we've been aware of this connection, so it seems like a pretty important finding. The paper explores the utility of this connection by applying to an MNIST task. Interestingly, the connection yields important insights in both directions: stochastic sampling in an RBM is faster than Hopfield due to a smaller matrix and parallel layer wise updates, whereas initializing an RBM with the projection rule from Hopfield allows it to find a better solution faster.\\n\\nI really enjoyed reading the paper, I learned something new, and I think others will too! It is an important advance in our understanding of Hopfield nets and RBMs.\", \"rating\": \"10: Top 5% of accepted papers, seminal paper\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"ICLR review for \\\"On the mapping between Hopfield networks and Restricted Boltzmann Machines\\\"\", \"review\": \"This paper considers a mapping between the well known Hopfield Neural Networks and Restricted Boltzmann Machines. In contrast with previous literature that consider the case where the patterns / data features to memorize were uncorrelated, the authors extend the mapping to arbitrarily correlated patterns, which allows to consider much more realistic settings. The mapping is computationally speaking relatively cheap. This mapping is shown to allow for significantly better initialization (than random) of the weights of a RBM, in the sense that the training is then much faster to reach comparable generative and/or generalization performance. In this sense the mapping is not only interesting from a theoretical point of view, but also practically. This paper should be considered as an applied one, as there is no real analytic theory of why this mapping helps the learning, but the experiments are well carried: the boost in learning is demonstrated through experiments in MNIST data, and the results are well explained and convincing. The appendices are also well written and are a good addition to the main part. Overall the paper is well written (the paper can be used by non-specialists also as introduction to Hopfield NNs and RBMs), the results are interesting and relevant to the ML community, the paper can be read without much effort. Even if RBM are not anymore state-ot-the art generative models, the results are encouraging and might lead to future improvements in more modern architectures. I have no specific concern. The paper is overall very well written. The paper is slightly incremental as similar mappings were known, but it remains a relevant contribution, and the aspect of using this mapping as a way to boost learning in RBM seems new, and interesting. I recommend publication after slight corrections, see below.\", \"typos_and_corrections\": \"_text below (1): J=1/N Xi^T Xi^T ->1/N Xi Xi^T \\n_(3): please detail the last equality\\n_(8) is true only for the lambda that verify the fixed point / saddle point equations: please mention it\\n_below (11): the p the columns of -> the p columns of\\n_(12): please explain what is GL_p(R)\\n_\\\"At the other end, 0 \\u226a 10k/N < 1, ...\\\" : Any x > 0 is >> 0, so please be more precise\\n_Above Fig 3: \\\"appear qualitatively similar\\\" : this is not obvious...\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A theoretical link between BN and RBM, but experiments are worth improving\", \"review\": [\"# Summary\", \"This paper shows a relationship between the project rule weights of a Hopfield network (HN) and the interaction weights in a corresponding restricted Boltzmann machine (RBM). The mapping from HN to RBM is facilitated by realising that the partition function of BN can be seen as the partition function of a binary-continuous (Bernoulli-Gaussian) RBM. The authors comments on the mapping from RBM to BN. The experiments show the advantages of training RBM with weights initialised from BN projection weights in generation and classification.\", \"## Strong points:\", \"I am not familiar with the literature, but the results seem new to me.\", \"The experiments show advantages of BN initialisation, pointing to new directions of improving RBM training.\", \"The paper is fairly clearly written.\", \"## Weak points\", \"The HN -> RBM mapping is quite clear, but the reverse RBM -> HN mapping is not very well established, and there are no experiments showing how effective the approximate reverse mapping works on associative memory tasks typical for HNs. I also believe this lowers the impact of this paper, given that the forward mapping is based on a simple revelation.\", \"The authors' description of the experimental results are not accurate enough. and the results raise several questions to be addressed.\", \"# Recommendation\", \"I'm in favour of rejection, but some concerns can be addressed fairly easily (with experiments) so I'm open to raising my score if questions are well-addressed.\", \"## Issues and questions to address\", \"The authors should provide experiments on the reverse mapping as suggested above.\", \"I do not agree that figure 3 shows that RBM training \\\"simply 'fine tunes'\\\" the weights -- the difference is quite stark. How about increasing the batch size so that there is little SGD noise?\", \"Figure 4a: traces are cut-off just when random initialization is catching up with HN initialization. This also applies to Figure 5.\", \"There are a few descriptions suggesting \\\"HN init. appears to train much faster than random init\\\". However, the rate of increase in of likelihood in Figure 4 is shallower for HN than for rand init. Is the advantage only at the 0'th RBM epoch?\", \"The author only compared with purely random initialisation, which is perhaps the most naive baseline. I would suggest comparing to a (slightly) more clever initialisation, perhaps PCA or something better (those mappings in previous work the authors cited and in Appendix B). Or, the authors could also initialise the RBM by first training it on the within-class cluster centres (using a very large number of sleep samples for the sleep-phase) which may also be a more fair comparison?\", \"In the classification objective, if I understand correctly, the feature function is essentially quadratic in the input patterns. Should there be an ideal test error that is computed by a quadratic neural network trained with supervision by backpropagation? If the HN classifier (blue in Figure 5) is approaching this idealization, then this will strengthen the claim.\", \"The discussion on the extension to more generic, deep architectures is not well supported, and I do not see the extension to be so straightforward given the content of the current paper. In generation, supervised labels and clustering are used to simplify learning. 
Is the network able to learn just on the MNIST digits, even for the real images within a single class (e.g. \\\"7\\\")?\", \"Can the authors try to characterise whether the HN initialisation is related to log-likelihood training? I wonder if there is any interesting theory; otherwise, measuring model performance by log-likelihood seems a bit arbitrary (though it makes the comparison to contrastive divergence easier).\", \"# Detailed suggestions (not to affect decision)\", \"Reference to RBMs should include more historic ones from Hinton (e.g. 2006)\", \"I do not see the purpose of (7) and (8), and they are only referred to in the Appendix (the review content in the Appendix is informative by itself though).\", \"Eqn (11), should it be $w_\\\\mu w_\\\\mu^T$ in the sum?\", \"Third line above (16), should $H$ and $Z$ be indexed by $\\\\mu$?\", \"Line above (D.4), $WW^T = \\\\dots B_p^T$?\", \"==== update ====\", \"I thank the authors for providing such detailed response. All my concerns are addressed and reflected in the revision (though some are much better done than the rest). I congratulate the authors on their spirit of maintaining a high standard on the theory, experiments and descriptions, and therefore significantly raise my score. I hope to see this paper accepted.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
rWZz3sJfCkm | Efficient Generalized Spherical CNNs | [
"Oliver Cobb",
"Christopher G. R. Wallis",
"Augustine N. Mavor-Parker",
"Augustin Marignier",
"Matthew A. Price",
"Mayeul d'Avezac",
"Jason McEwen"
] | Many problems across computer vision and the natural sciences require the analysis of spherical data, for which representations may be learned efficiently by encoding equivariance to rotational symmetries. We present a generalized spherical CNN framework that encompasses various existing approaches and allows them to be leveraged alongside each other. The only existing non-linear spherical CNN layer that is strictly equivariant has complexity $\mathcal{O}(C^2L^5)$, where $C$ is a measure of representational capacity and $L$ the spherical harmonic bandlimit. Such a high computational cost often prohibits the use of strictly equivariant spherical CNNs. We develop two new strictly equivariant layers with reduced complexity $\mathcal{O}(CL^4)$ and $\mathcal{O}(CL^3 \log L)$, making larger, more expressive models computationally feasible. Moreover, we adopt efficient sampling theory to achieve further computational savings. We show that these developments allow the construction of more expressive hybrid models that achieve state-of-the-art accuracy and parameter efficiency on spherical benchmark problems. | [
"efficient",
"computer vision",
"natural sciences",
"analysis",
"spherical data",
"representations",
"equivariance",
"rotational symmetries"
] | Accept (Poster) | https://openreview.net/pdf?id=rWZz3sJfCkm | https://openreview.net/forum?id=rWZz3sJfCkm | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"ceU2EwYANL-",
"5WiqP8b4WHV",
"PN1KIV6var1",
"3C8oxTfRrvA",
"jnAcoWOHarq",
"TRSa7hQmhUR",
"Hsi2LxsuSsX",
"WnuBqrR80-u",
"hcqIgP3QFdt",
"EwuNng7dIAf",
"xTWYk96CIM0",
"U0y8uGYjYRj",
"hf9JjN8IWpz"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040483860,
1605706954590,
1605706915403,
1605706785299,
1605706747357,
1605706627126,
1605706565673,
1605706396356,
1605706334105,
1603949577653,
1603929189445,
1603901550609,
1603207901205
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3692/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3692/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3692/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3692/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes an efficient approach for computing equivariant spherical CNNs, significantly reducing the memory and computation costs. Experiments validate the effectiveness of the proposed approach.\", \"pros\": \"1. Speeding up equivariant spherical CNNs is a valuable topic in deep learning. \\n2. The proposed approach is effective, in all parameter size, memory footprint and computation time.\\n3. The theory underpinning the speedup method is sound.\", \"cons\": \"1. The readability should be improved. Two of the reviewers complained that the paper is hard to read and only Reviewer #2 reflected that it is \\\"easy\\\" to read (but only under the condition that the readers are familiar with the relevant mathematics), and this situation is improved after rebuttal. Nonetheless, this should be further done.\\n2. The experiments are a bit limited. This may partially be due to limited benchmark datasets for spherical data, but for the existing datasets used for comparison, Esteves et al. (2020) is not compared on all of them. Esteves et al. (2020) is only reported on spherical MNIST, which has very close performance to the proposed one. This worries the AC, who is eager to see whether on QM7 and SHREC\\u201917 the results would be similar. \\n\\nAfter rebuttal, three of the reviewers raised their scores. So the AC recommended acceptance.\"}",
"{\"title\": \"Response to comments - 2/2\", \"comment\": \"*\\\"It should be clarified in the paper that full mixing happens only across several layers (as many as the maximum path length / tree width in the MST). The question then arises whether full mixing actually happens in the considered architecture, given that it is not very deep.\\\"*\\n\\nNote that the MST-set for degree zero contains the self-loops (\\\\ell,\\\\ell) for all other degrees. Therefore even with just a single application of a tensor-product non-linearity all of the information in the input is in some way connected to the output and is therefore not lost. This confusion may have arisen due to an oversight on our part where we neglected to mention in our original submission that we paired the edges of the MST with the self-loop edges. The sets were visualized correctly in Figure 2, but not originally described accurately in the text. We have now corrected the description in Section 3.1.3 and apologize for any previous confusion.\\n\\n*\\\"It would be interesting to see actual implementation details in some DL framework, as well as wallclock timings. Also, code would be much appreciated.\\\"*\\n\\nDue to the double-blind review process, we have not mentioned our code, which is implemented in TensorFlow, in the submitted paper at present (since that would unblind us). If accepted, we plan to mention the code in a footnote. While the code is not currently public it is available on request.\\n\\nWe thank the referee for the suggestion about computational cost. We agree that quantifying the implication of the reduced complexity on flop count (and memory overhead) is useful for readers. We have therefore included a brief discussion in a new Section 3.1.4, which refers to new comparisons presented in greater detail in Appendix F. As expected, these new quantitative comparisons of computational cost and memory requirements demonstrate the considerable savings provided by our approaches. We thank the referee once more for making this suggestion, which we hope has resulted in a marked improvement in the paper.\\n\\n*\\\"The appendix describes a method for enforcing spatial localization of the spectral filters, but it is not clear from the paper if/how this is actually used in the network architecture that is tested.\\\"*\\n\\nWe do indeed enforce spatial localization of the spectral filters as described in the Appendix and have now confirmed this in Section 4.0.\\n\\n*\\\"It would be nice to know why the initial convolution layers are necessary, instead of just using the generalized layers introduced in this paper in their full glory.\\\"*\\n\\nThis was partly because we were keen to demonstrate the manner in which these layers can be leveraged alongside each other but also simply because we found empirically that models with traditional spherical convolutions in the early layers and generalized layers later on performed very well. Our intuition is that this is likely due to an increased importance of defining filters with few parameters and nice real-space properties in the early layers where low-level features are being learnt. We plan to further investigate how we might encode similar properties for the efficient generalized layers. \\n\\n*\\\"I may have missed it, but could not figure out what L_G^(psi) refers to in 2.6.\\\"*\\n\\nL_G^(psi) is introduced at the end of Section 2.4. We've now added additional clarification in 2.6 as well.\"}",
"{\"title\": \"Response to comments - 1/2\", \"comment\": \"We thank the referee for their comments. We respond to each comment in turn below. The referee's original comments are italicized, while our responses are given in roman font. All revisions to the manuscript are highlighted in red.\\n\\n*\\\"The paper introduces a framework for computationally efficient and exactly rotation-equivariant spherical CNNs. The work most closely resembles the Fourier space method of Kondor et al., but improves on it in a number of ways: firstly, a channel-wise structure is introduced for the tensor product nonlinearities, which avoids the degree blowup of this operation while still allowing mixing between different harmonic degrees. Secondly, computational complexity of linear layers is reduced by factorizing it into three operators, two of which operate similar to depthwise-separable convolutions and one of which acts uniformly across channels. Thirdly, an optimized sparse degree mixing set is proposed, based on a minimum spanning tree. Finally, a more efficient sampling theorem is used that reduces the Nyquist rate by a factor of two compared to the ones used in previous works on spherical CNNs.*\\\"\\n\\nWe thank the referee in particular for their accurate and concise summary of our work.\\n\\n*\\\"The paper is very well written and the authors clearly have a thorough understanding of the noncommutative harmonic analysis involved. This does not mean the paper will be easy to understand for all readers, but for those familiar with the relevant mathematics, either from textbooks or earlier works in the spherical CNN literature, the paper is very readable. The proposed improvements make a lot of sense to me, and their computational complexity improvements are clearly stated. The performance of a network architecture that includes the new layers is tested and shown to yield competitive or state of the art performance on several benchmark problems that have been used in many previous works.\\\"*\\n\\nWe are pleased that the referee believes the paper to be very readable for those familiar with the relevant mathematics. We have made revisions to try and make it more readable to those not so familiar with this background, but the paper remains necessarily technical. These revisions include additional details and explanations, making greater use of standalone equations and adding references to additional resources. \\n\\nWe are also pleased that the referee considers the proposed improvements to make a lot of sense.\\n \\n*\\\"Overall I think this is a very nice paper, but I have a few minor concerns and points of improvement:*\\n\\n*The degree mixing set (3.1.3) is a minimum spanning tree that minimizes a certain computational cost. This makes some sense, but it is not clear to me that this approach is optimal in any meaningful sense or necessary at all. I have personally experimented with sparse channel connectivity in planar CNNs, and found that it does not seem to matter much how exactly the channels are connected, with the main factor determining compute/accuracy being the number of connections. Full degree mixing does seem desirable, but this implies the need of a MST only if one wishes to use the same connectivity structure in multiple layers. An interesting baseline would be to do the degree mixing using a random pattern in each layer, with various sparsity levels. It may turn out that only the sparsity level but not the precise connectivity structure matters in practice. 
Such a finding would not diminish the paper's significance.\"*\n\nThe referee is correct that the MST-based subsets are not optimal in any theoretical sense. We have referred to the degree mixing sets as \"optimized\" but appreciate that the distinction may not be clear, and so we have made a number of revisions to clarify that various subsetting policies are possible and that the one we present is merely one that we found to be particularly cost-effective. However, we believe in this case strong performance derives from more than the number of connections being preserved. When experimenting we rarely suffered any noticeable drop-off in performance when reducing from the full sets to the much-reduced MST-based sets, which is why we focus on those sets and use them for experiments. We have added a comment in 4.1 stating the result we achieved on MNIST when using full sets in the R/R setup.\n\nOther subsetting policies we tried also gave reasonable performance. As an example, in response to the referee's comment we ran an experiment whereby random subsets of the same size as the MST-based ones are used, and this yielded an accuracy of 99.27 on the MNIST R/R mode, compared to the state-of-the-art 99.38 achieved by the MST-based sets. For sets of a fixed size the MST approach selects particularly low-cost ones and preserves performance in a way other sets do not.\"}",
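To make the MST-based subsetting discussed in this exchange concrete, the following is a minimal illustrative sketch (not the authors' code): harmonic degrees are treated as graph nodes, edges are weighted by an assumed placeholder cost of mixing two degrees, and the resulting minimum spanning tree is paired with the self-loop edges as described above. The cost function `mixing_cost` and the degree cutoff `L` are hypothetical stand-ins, not the paper's actual cost model.

```python
# Illustrative sketch only: build a degree-mixing subset as a minimum
# spanning tree over harmonic degrees 0..L, then pair it with self-loops.
import networkx as nx

L = 8  # assumed maximum harmonic degree (placeholder)

def mixing_cost(l1, l2):
    # Hypothetical cost of mixing degrees l1 and l2 via the tensor product;
    # a real cost model would count the fragments the product generates.
    return (2 * l1 + 1) * (2 * l2 + 1)

G = nx.Graph()
G.add_weighted_edges_from(
    (l1, l2, mixing_cost(l1, l2))
    for l1 in range(L + 1)
    for l2 in range(l1 + 1, L + 1)
)
mst = nx.minimum_spanning_tree(G)

# Pair the MST edges with the self-loop edges (l, l), as described above,
# so every degree takes part in at least one product.
mixing_set = sorted(
    {tuple(sorted(e)) for e in mst.edges()} | {(l, l) for l in range(L + 1)}
)
print(mixing_set)
```

The random-subset baseline mentioned above could be sketched the same way by replacing the MST call with a random sample of equally many edges.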
"{\"title\": \"Response to comments - 2/2\", \"comment\": \"*\\\"Second, the experiments are somehow limited. The authors only test the proposed convolution operations on a single model, and the model size is different from the baselines except for the MNIST experiment. It is unclear why the model sizes are not tied in the experiments. A more informative experiment will be comparing different methods over different model sizes.\\\"*\\n\\nWe purposefully selected a similar base architecture for all experiments to demonstrate that the architecture was not highly tuned to each problem but that the underlying general architecture worked well across all benchmark problems considered. Nevertheless, the exact architecture for each problem does vary to some extent between experiments, as the varying model sizes (i.e. number of parameters) shown in the tables demonstrates. We considered a varying number of parameters for a number of experiments to demonstrate the improved parameter efficiency of our approach, i.e. that we are able to achieve superior accuracy and parameter-efficiency simultaneously.\\n\\n*\\\"Also, while the main contribution of this work is to reduce the time complexity of the convolution operation, the experiments do not show the comparison in run time. The authors should also try to evaluate the model efficiency as well as memory overhead, as these are also important factors that limit the usage of spherical convolution operation.\\\"*\\n\\nWe thank the referee for the suggestion. We agree that quantifying the implication of the reduced complexity on flop count and memory overhead is useful for readers. We have therefore included a brief discussion in a new Section 3.1.4, which refers to new comparisons presented in greater detail in Appendix F. As expected, these new quantitative comparisons of computational cost and memory requirements demonstrate the considerable savings provided by our approaches. We thank the referee once more for making this suggestion, which we hope has resulted in a marked improvement in the paper.\"}",
"{\"title\": \"Response to comments - 1/2\", \"comment\": \"We thank the referee for their comments. We respond to each comment in turn below. The referee's original comments are italicized, while our responses are given in roman font. All revisions to the manuscript are highlighted in red.\\n\\n*\\\"This paper introduces a generalized spherical convolution operation that is strictly equivariant to rotation. The authors show that the spherical convolution operations introduced in prior works can be encompassed by the proposed approach. Because spherical convolutions introduce significant computational overhead, the authors also introduce an array of methods that reduce the computational cost while maintaining the model accuracy. Experiment results on multiple benchmark datasets show that the proposed approach outperforms the alternative approaches while having less number of parameters.\\\"*\\n\\n*\\\"This paper studies an important problem. In particular, it addresses an important issue in spherical convolution operation, i.e. the computational cost of the operation. The proposed operation has the desirable property of strict rotational invariance, and it is general enough to replace existing spherical convolution operators and may be used as the basic component for CNN on spherical signals. The experiment results also verify the benefit of the proposed method.\\\"*\\n\\nWe thank the referee in particular for their accurate and concise summary of our work, and for recognizing the importance of the problem we set out to address and the value of the contributions we propose.\\n\\n*\\\"On the other hand, there are several aspects on which the paper may be improved. First of all, there are some designs in the proposed method that are not carefully discussed or tested:*\\n\\nWe thank the referee for their considered criticisms.\\n\\n*1. While the authors use tensor-product to replace pointwise activation, it is unclear what's the relation between these two operations. Is tensor-product equivalent, more or less expressive than pointwise activation? Given that activation plays an important role in neural network, the authors should try to provide more information about the new activation function.\\\"*\\n\\nThe biggest difference between these non-linear activations is the fact that the tensor-product is strictly equivariant, while pointwise activations on the sphere are not. This is the primary motivation to consider generalized signals and we have now emphasized this point in the final paragraph of Section 2.5. Primarily we show this mathematically, although we also present corroborating numerical experiments that are discussed briefly in Section 2.5 and in greater detail in Appendix D. We also highlight that tensor-product operators have been considered in neural networks previously by Thomas et al. (2018) and Kondor et al. (2018); the latter specifically for a non-linear activation function.\\n\\nTo provide some further intuition, there are connections between the tensor-product activation and the activation that would correspond to obtaining a sample-based representation and applying the function $f(x)=x^2$ pointwise before returning to a harmonic-based representation. This would correspond to proceeding the tensor-product activation with a specific down-projection. We instead make the down-projection learnable in the first step of our constrained convolution and therefore the activation is more general. 
We have added a new Appendix E detailing this relationship to pointwise squaring.\\n\\n*\\\"2. The authors propose channel-wise activation and degree mixing to reduce the computational cost. However, they also reduce the expressiveness of the model. Therefore, it is worthwhile to provide some study on how they impact the performance of the model. For example, what will the model performance be if these methods are not applied?\\\"*\\n\\nThe benchmark problems considered in Section 4 and the direct comparisons to Kondor et al. (2018) provide precisely the analysis the referee suggests. Our improved results compared to Kondor et al. (2018), which are typically quite substantial both in terms of accuracy and parameter efficiency, show that by making the restrictions we propose it is actually possible to define and train far more expressive models than is otherwise possible. Nevertheless, in Section 4.1 results comparing the difference in MNIST classification accuracy when using MST-based degree mixing sets relative to reduced MST-based sets are provided. We now comment also that performance when using full mixing sets is typically very similar to when using the MST-based sets (justifying the reduction).\"}",
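As background for the relationship between the tensor-product activation and pointwise squaring discussed in this exchange, the standard Clebsch–Gordan expansion can be written schematically as follows (our notation, not the paper's; normalization constants suppressed):

```latex
% Schematic only; normalization constants suppressed.
(f \otimes f)^{\ell m}_{\ell_1 \ell_2}
  = \sum_{m_1, m_2}
    \langle \ell_1 m_1 \, \ell_2 m_2 \mid \ell m \rangle \,
    \hat{f}_{\ell_1 m_1} \, \hat{f}_{\ell_2 m_2} ,
\qquad
\widehat{(f^2)}_{\ell m}
  \propto \sum_{\ell_1, \ell_2}
    c_{\ell_1 \ell_2 \ell} \,
    (f \otimes f)^{\ell m}_{\ell_1 \ell_2} .
```

Here the $c_{\ell_1 \ell_2 \ell}$ are fixed coefficients (involving $\langle \ell_1 0 \, \ell_2 0 \mid \ell 0 \rangle$ factors), so squaring amounts to the tensor product followed by a specific non-learnable down-projection; the constrained convolution described above instead makes that down-projection learnable.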
"{\"title\": \"Response to comments - 2/2\", \"comment\": \"*\\\"Provide additional feedback with the aim to improve the paper. Perhaps state that \\\\mathcal{H} is the space of spherical signals and the superscript indicates the layer The notation is a bit difficult to follow (and read since quite a bit is inline) and often is not explained, for example it could be helpful to say that L^2(S^2) are the square integrable functions on the sphere and show what that means. I think the paper would be easier to read if the language was consistent, for example, in the introduction the language of real and harmonic space is used and in section 2 it seems to change to real and Fourier space. I wonder if spatial and spectral are good words to use in place of these. (paragraph below eqn 3) remove (w.r.t) In part because the authors attempt to describe [1,2] and [3] at the same time and because of the abundance of long inline equations, the mathematical presentation is difficult to follow Moreover, the mathematics are not trivial and not particularly well known, perhaps providing intuition along with the equations would improve readability\\\"*\\n\\nWe thank the reviewer for many useful suggestions, which we have taken into account.\\n\\nIn general we have added explanations for mathematical concepts in words (rather than through equations only), made greater use of standalone equations (rather than inline equations), added additional details and descriptions, and generally attempted to improve the readability of the paper throughout. We have clarified that superscripts index layers in Section 2.0. We now define $L^2(\\\\Omega)$ as the space of square integrable functions. We have ensured consistent usage of 'Fourier' and 'harmonic' for representations (adopting 'harmonic' throughout since this is most common in the literature on harmonic analysis on the sphere). We have added references to additional resources (e.g. textbooks and review articles) at the beginning of Section 2 that provide greater detail on the mathematical background and (as mentioned above) reduced our usage of inline equations.\\n\\n*\\\"Possible typos: (Conclusion) powerful hybrid model \\u2192 powerful hybrid models (Introduction) Many fields involve \\u2192 many fields use\\\"*\\n\\nWe apologize for the typo in the conclusion which we have now fixed. We have retained the use of 'involve' in the introduction since we believe this best captures the variety of ways data may be considered across fields.\"}",
"{\"title\": \"Response to comments - 1/2\", \"comment\": \"We thank the referee for their comments. We respond to each comment in turn below. The referee's original comments are italicized, while our responses are given in roman font. All revisions to the manuscript are highlighted in red.\\n\\n*\\\"Summarize what the paper claims to contribute. The authors claim to introduce an efficient alternative to previous Spherical CNN models\\\"*\\n\\n*\\\"Strengths: The authors consider the problem of spherical image processing using convolution The authors present strong empirical results\\\"*\\n\\n*\\\"Weaknesses: Both the mathematical presentation and discussion are difficult to follow*\\\"\\n\\n*\\\"Clearly state your recommendation (accept or reject) and justification. Reject. It seems the authors have given considerable attention to the problem and produced compelling results; however, for me, the mathematical presentation and discussion are difficult to follow which I expect will make it difficult for readers to understand and build upon what has been done. My impression is that some of the difficulty could be resolved with more standard notational choices (e.g. nonlinearities are not often written \\\\mathcal{N}_\\\\otimes), and limiting the use of inline equations.\\\"*\\n\\nWe have taken on board the referee's comments regarding readability and have made numerous revisions throughout the paper in order to address this. In particular we have added explanations for mathematical concepts in words (rather than through equations only), made greater use of standalone equations (rather than inline equations), added additional details and descriptions, and generally attempted to improve the readability of the paper throughout.\\n\\nWe appreciate that some of our notational choices are non-standard but they are consistent and best allow us to precisely describe our contributions. For example we realize that $\\\\mathcal{N}$ would not usually be used to represent non-linearity, however we are describing a non-conventional case where the non-linearity is introduced through an operator rather than a function acting pointwise and our hope is that our consistent usage of calligraphic script for operators makes this distinction clear.\\n\\n*\\\"Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment. (first paragraph 2.1) Does the operator map spherical signals to signals on SO(3)? The mathematical presentation is given with filters and functions in \\\\mathbb{C}, is reflected in the implementation? It is unclear to me what the authors mean (specifically) by a hybrid approach\\\"*\\n\\nHere $\\\\mathcal{A}$ can indeed refer to an operator mapping signals on the sphere onto those on the rotation group SO(3) (and indeed this is a case of particular interest) but here we are presenting the more general case where $\\\\mathcal{A}$ could take different forms. For example, it could instead be an operator mapping signals from the sphere to the sphere, the rotation group to the rotation group, or from the rotation group to the sphere. To keep the description concise, we typically present general expressions and then specialize to specific changes when necessary or insightful. 
\\n \\nWhilst data of interest are typically real-valued signals on the sphere, their harmonic representations consist of complex coefficients and this is indeed reflected in the implementation.\\n\\nVarious existing spherical CNN constructions have been suggested that repeatedly apply the same layer. By a hybrid approach we mean one in which different types of layers are applied within a single model. We have clarified this in Section 2.6.\"}",
"{\"title\": \"Response to comments - 2/2\", \"comment\": \"*\\\"Group convolution has an adverse effect on performance on standard CNNs. I am not sure about the validity of the following statement: \\\"restricted N_b in which only a subset of P_L is used for each degree ` still defines a strictly equivariant operator\\\". Though it makes intutive sense, is there a proof of the same?\\\"*\\n\\nBecause rotation of generalized signals is defined at the fragment level and each individual fragment responds in the desired way to rotations of the input, it follows that any collection of such fragments also does. The equations detailing the fragment-level response to rotations are no longer inline and can now be found in Equations (7)-(9). We hope this serves to both highlight the importance and aide understanding of this important point.\\n\\n*\\\"The results section leaves a lot to be desired. In table 3, we see that it is not state of the art on several metrics.\\\"*\\n\\nFor SHREC'17 it is indeed true that we do not achieve the state-of-the-art in all metrics but that we do in 3 of 5 accuracy metrics and in the number of parameters. Therefore we achieve the state-of-the-art in 4 of 6 metrics in total. The next best approach of Esteves et al. (2018) achieves the state-of-the-art in only 2 of 6 metrics (note that results are not quoted in Esteves et al. 2018 for the missing metrics in Table 3). Since we perform best across 4 of 6 metrics we believe it is fair to say that overall we present the state-of-the-art.\\n\\n*\\\"The authors do not compare with the improved Esteves et. al. paper from 2020. This result should be added.\\\"*\\n\\nWe do indeed compare to Esteves et al. (2020) in all of the experiments in common, which includes only MNIST as shown in Table 1 (the QM7 and SHREC'17 benchmark problems are not considered in Esteves et al. 2020).\\n\\n*\\\"What is the implication of the CL logL complexity. This should translate to reduced flop count, but there is no discussion on flop or timing of this approach anywhere in the paper. This makes me skeptical whether the proposed approach improves efficiency in practice.\\\"*\\n\\nWe thank the referee for the suggestion. We agree that quantifying the implication of the reduced complexity on flop count and memory overhead is useful for readers. We have therefore included a brief discussion in a new Section 3.1.4, which refers to new comparisons presented in greater detail in Appendix F. As expected, these new quantitative comparisons of computational cost and memory requirements demonstrate the considerable savings provided by our approaches. We thank the referee once more for making this suggestion, which we hope has resulted in a marked improvement in the paper.\\n \\n*\\\"There are many typos, incomplete sentences, long hard to read sentences etc. I would recommend the authors to compare on all benchmarks provided in Esteves et. al. (2020).\\\"*\\n\\nWe apologize for any typos and have done our best to eradicate these for the revised version. If the referee notices any further typos or sentences that are not clear please do let us know and we will correct them.\\n\\nAs discussed above, we did already compare all benchmarks that are in common with Esteves et al. (2020).\\n\\n*\\\"Overall, the paper is a very hard read which no clear message on the key contributions. It seems that the MST approach is driving the efficiency (without proof?), coupled with group convolutions which is well known in literature and cannot be credited as a contribution. 
This coupled with the marginal improvement on select datasets and incomplete evaluation does not inspire acceptance.\\\"*\\n\\nWe hope the additional clarifications and explanations added throughout the paper, as well as the additional resources that we have now referenced including textbooks and review articles (as discussed above), improve the readability of the paper. Referee 2 acknowledges that the paper may indeed be difficult to parse for those not familiar with a background in harmonic analysis, and we hope the above additions help such readers, but we are encouraged to see that Referee 2 nevertheless finds the paper \\\"very well written\\\" and \\\"very readable\\\".\\n\\nWe believe there are many problems involving spherical data whereby the only suitable approach allowing the practitioner to leverage strictly equivariant spherical CNNs on their problem is by using the layers introduced in this paper. We believe this to be a significant contribution, particularly given many problems require strict rotational equivariance for predictions to be physically meaningful.\\n\\nRegarding numerical experiments, we believe the improvements shown are not marginal since we typically reduce the number of parameters by a factor of two or greater, we achieve the state-of-the-art on all experiments considered, in some cases making a considerable improvement over all existing prior art (e.g. for the QM7 problem), and we performed a complete evaluation (we do compare to Esteves et al. 2020 for all problems in common).\"}",
"{\"title\": \"Response to comments - 1/2\", \"comment\": \"We thank the referee for their comments. We respond to each comment in turn below. The referee's original comments are italicized, while our responses are given in roman font. All revisions to the manuscript are highlighted in red.\\n\\n*\\\"The authors introduce channel-wise convolutions, and an optimized degree mixing set in order to construct equivariant layers that exhibit improved scaling properties and parameter efficiency on some prototypical spherical CNN tasks.*\\n\\n*\\\"Section 2 is unnecessarily math heavy with representations and terminologies introduced which are not relevant to the central claims in the paper and not reused in latter sections. The authors should pick out the essentials bits and place the rest of the technical bits to the supplement. The gained space should be used to expand and better explain section 3 which is extremely hard to understand.\\\"*\\n\\nWe have taken on board the referee's comments regarding readability and have made numerous revisions throughout the paper in order to address this. In particular we have added explanations for mathematical concepts in words (rather than through equations only), made greater use of standalone equations (rather than inline equations), added additional details and descriptions, and generally attempted to improve the readability of the paper throughout.\\n\\nWe also appreciate that the paper may not be straightforward to parse for readers not familiar with the background mathematics of computational harmonic analysis. We have therefore added references to additional resources (e.g. textbooks and review articles) at the beginning of Section 2 that provide greater detail on the mathematical background. We appreciate that our paper relies on considerable mathematical background but are encouraged to see that Referee 2 comments that for those familiar with such background material the paper is \\\"very well written\\\" and \\\"very readable\\\".\\n\\nIn terms of Section 2 specifically, which the referee highlights, we agree a lot of technical details are given. However, these details are essential to the paper to ensure it is complete and as self-contained as possible. These details are critical to the central contributions of the paper and are used throughout Section 3 since the precise descriptions of our contributions of Section 3 build directly on the material presented in Section 2. While Referee 2 finds the paper readable, we appreciate that many readers will not have a high degree of familiarity with the extensive mathematical background and we have therefore made a number of minor revisions throughout Section 2 in an attempt to clarify the relevance of all of the theory introduced. \\n\\n*\\\"Reading and re-reading Section 3 several times, I am still lost as to what figure 1 is trying to demonstrate.\\\"*\\n\\nWe hope that our revisions to Section 2 also help to make the contributions in Section 3 more clear. Figure 1 is an attempt to visualize the drastic expansion in representation size due to the tensor-product activation, comparing the prior approach with the channel-wise approach that we propose. We have revised the figure caption to hopefully make this clearer to readers.\\n\\n*\\\"I do not understand what are the trade-offs involved with the constrained generalized convolution as opposed to generalized convolution. 
From the results it seems that it does not matter, which is counterintuitive.\\\"*\\n\\nThe very large representation resulting from the tensor-product activation is merely a necessary evil for introducing non-linearity, and is in no way desirable. Ideally we'd like the operator to be size-neutral (as is the case for typical pointwise activations). One could consider incorporating into the non-linear operator a subsequent non-learnable projection to make the overall operator size-neutral (in fact the harmonic implementation of pointwise squaring takes this form, as now detailed in an additional Appendix E). Performance would still be reasonable and note how this non-learnable projection would also be uniform across channels. We instead choose the more general approach of allowing the down-projection to be learnable, but the interpretation as an extension of the non-linearity remains. By staying size-neutral before learning features across channels, we don't suffer the blow-up in parameters that Kondor et al. (2018) experience. Constraining the convolution therefore certainly does matter. When comparing to the unconstrained convolutions performed by Kondor et al. (2018) we perform better across all experiments by significant margins.\"}",
"{\"title\": \"Hard to understand paper with marginal performance improvement. Insufficient experiments.\", \"review\": \"The authors introduce channel-wise convolutions, and an optimized degree mixing set in order to construct equivariant layers that exhibit improved scaling properties and parameter efficiency on some prototypical spherical CNN tasks.\\n\\nSection 2 is unnecessarily math heavy with representations and terminologies introduced which are not relevant to the central claims in the paper and not reused in latter sections. The authors should pick out the essentials bits and place the rest of the technical bits to the supplement. The gained space should be used to expand and better explain section 3 which is extremely hard to understand. \\n\\nReading and re-reading Section 3 several times, I am still lost as to what figure 1 is trying to demonstrate. I do not understand what are the trade-offs involved with the constrained generalized convolution as opposed to generalized convolution. From the results it seems that it does not matter, which is counter intutitive. Group convolution has an adverse effect on performance on standard CNNs. I am not sure about the validity of the following statement: \\\"restricted N_b in which only a subset of P_L is used for each degree ` still defines a strictly equivariant operator\\\". Though it makes intutive sense, is there a proof of the same? \\n\\nThe results section leaves a lot to be desired. In table 3, we see that it is not state of the art on several metrics. The authors do not compare with the improved Esteves et. al. paper from 2020. This result should be added. What is the implication of the CL logL complexity. This should translate to reduced flop count, but there is no discussion on flop or timing of this approach anywhere in the paper. This makes me skeptical whether the proposed approach improves efficiency in practice. There are many typos, incomplete sentences, long hard to read sentences etc. I would recommend the authors to compare on all benchmarks provided in Esteves et. al. (2020). \\n\\nOverall, the paper is a very hard read which no clear message on the key contributions. It seems that the MST approach is driving the efficiency (without proof?), coupled with group convolutions which is well known in leterature and cannot be credited as a contribution. This coupled with the marginal improvement on select datasets and incomplete evaluation does not inspire acceptance.\", \"post_rebuttal_comment\": \"Having read the reviews from other reviewers who are subject matter experts, and the authors rebuttal which helped clarify most of my concerns, I am increasing my rating for the paper. I recommend acceptance as two of the reviewers are convinced about the positive impact of the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Difficult to follow\", \"review\": \"**Summarize what the paper claims to contribute.**\\nThe authors claim to introduce an efficient alternative to previous Spherical CNN models\\n\\n**Strengths:**\\nThe authors consider the problem of spherical image processing using convolution\\nThe authors present strong empirical results\\n\\n**Weaknesses:**\\nBoth the mathematical presentation and discussion are difficult to follow\\n\\n**Clearly state your recommendation (accept or reject) and justification.**\\nReject. It seems the authors have given considerable attention to the problem and produced compelling results; however, for me, the mathematical presentation and discussion are difficult to follow which I expect will make it difficult for readers to understand and build upon what has been done. My impression is that some of the difficulty could be resolved with more standard notational choices (e.g. nonlinearities are not often written \\\\mathcal{N}_\\\\otimes), and limiting the use of inline equations.\\n\\n**Ask questions you would like answered by the authors to help you clarify your understanding of the paper and provide the additional evidence you need to be confident in your assessment.**\\n(first paragraph 2.1) Does the operator map spherical signals to signals on SO(3)?\\nThe mathematical presentation is given with filters and functions in \\\\mathbb{C}, is reflected in the implementation?\\nIt is unclear to me what the authors mean (specifically) by a hybrid approach\\n\\n**Provide additional feedback with the aim to improve the paper.**\\nPerhaps state that \\\\mathcal{H} is the space of spherical signals and the superscript indicates the layer\\nThe notation is a bit difficult to follow (and read since quite a bit is inline) and often is not explained, for example it could be helpful to say that L^2(S^2) are the square integrable functions on the sphere and show what that means.\\nI think the paper would be easier to read if the language was consistent, for example, in the introduction the language of real and harmonic space is used and in section 2 it seems to change to real and Fourier space. I wonder if spatial and spectral are good words to use in place of these.\\n(paragraph below eqn 3) remove (w.r.t)\\nIn part because the authors attempt to describe [1,2] and [3] at the same time and because of the abundance of long inline equations, the mathematical presentation is difficult to follow\\nMoreover, the mathematics are not trivial and not particularly well known, perhaps providing intuition along with the equations would improve readability\\n\\n**Possible typos:**\\n(Conclusion) powerful hybrid model \\u2192 powerful hybrid models\\n(Introduction) Many fields involve \\u2192 many fields use\\n\\n**Post rebuttal**\\nWith consideration of the improved readability of the new submission and comments of other reviewers, I have modified both my initial rating and confidence.\\n\\n[1] Kondor, Risi, Zhen Lin, and Shubhendu Trivedi. \\\"Clebsch\\u2013gordan nets: a fully fourier space spherical convolutional neural network.\\\" Advances in Neural Information Processing Systems. 2018.\\n[2] Taco Cohen, Mario Geiger, Jonas K \\u0308ohler, and Max Welling. Spherical CNNs. InInternationalConference on Learning Representations, 2018\\n[3] Carlos Esteves, Christine Allen-Blanchette, Ameesh Makadia, and Kostas Daniilidis. LearningSO(3) equivariant representations with spherical CNNs. InProceedings of the European Con-ference on Computer Vision (ECCV), pp. 
52\\u201368, 2018\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"This paper addresses an important problem in spherical convolution, i.e. the computational cost, and proposes a series of approach to reduce the time complexity.\", \"review\": \"This paper introduces a generalized spherical convolution operation that is strictly equivariant to rotation. The authors show that the spherical convolution operations introduced in prior works can be encompassed by the proposed approach. Because spherical convolutions introduce significant computational overhead, the authors also introduce an array of methods that reduce the computational cost while maintaining the model accuracy. Experiment results on multiple benchmark datasets show that the proposed approach outperforms the alternative approaches while having less number of parameters.\\n\\nThis paper studies an important problem. In particular, it addresses an important issue in spherical convolution operation, i.e. the computational cost of the operation. The proposed operation has the desirable property of strict rotational invariance, and it is general enough to replace existing spherical convolution operators and may be used as the basic component for CNN on spherical signals. The experiment results also verify the benefit of the proposed method.\\n\\nOn the other hand, there are several aspects on which the paper may be improved. First of all, there are some designs in the proposed method that are not carefully discussed or tested:\\n1) While the authors use tensor-product to replace pointwise activation, it is unclear what's the relation between these two operations. Is tensor-product equivalent, more or less expressive than pointwise activation? Given that activation plays an important role in neural network, the authors should try to provide more information about the new activation function.\\n2) The authors propose channel wise activation and degree mixing to reduce the computational cost. However, they also reduce the expressiveness of the model. Therefore, it is worthwhile to provide some study on how they impact the performance of the model. For example, what will the model performance be if these methods are not applied?\\n\\nSecond, the experiments are somehow limited. The authors only test the proposed convolution operations on a single model, and the model size is different from the baselines except for the MNIST experiment. It is unclear why the model sizes are not tied in the experiments. A more informative experiment will be comparing different methods over different model sizes. Also, while the main contribution of this work is to reduce the time complexity of the convolution operation, the experiments do not show the comparison in run time. The authors should also try to evaluate the model efficiency as well as memory overhead, as these are also important factors that limit the usage of spherical convolution operation.\\n\\nThe rebuttal provides valuable information that was missing in the original paper and improves the readability. Therefore, I recommend accepting the paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"The paper introduces a framework for computationally efficient and exactly rotation-equivariant spherical CNNs. The work most closely resembles the Fourier space method of Kondor et al., but improves on it in a number of ways: firstly, a channel-wise structure is introduced for the tensor product nonlinearities, which avoids the degree blowup of this operation while still allowing mixing between different harmonic degrees. Secondly, computational complexity of linear layers is reduced by factorizing it into three operators, two of which operate similar to depthwise-separable convolutions and one of which acts uniformly across channels. Thirdly, an optimized sparse degree mixing set is proposed, based on a minimum spanning tree. Finally, a more efficient sampling theorem is used that reduces the Nyquist rate by a factor of two compared to the ones used in previous works on spherical CNNs.\\n\\nThe paper is very well written and the authors clearly have a thorough understanding of the noncommutative harmonic analysis involved. This does not mean the paper will be easy to understand for all readers, but for those familiar with the relevant mathematics, either from textbooks or earlier works in the spherical CNN literature, the paper is very readable. The proposed improvements make a lot of sense to me, and their computational complexity improvements are clearly stated. The performance of a network architecture that includes the new layers is tested and shown to yield competitive or state of the art performance on several benchmark problems that have been used in many previous works.\\n\\nOverall I think this is a very nice paper, but I have a few minor concerns and points of improvement:\\n\\nThe degree mixing set (3.1.3) is a minimum spanning tree that minimizes a certain computational cost. This makes some sense, but it is not clear to me that this approach is optimal in any meaningful sense or necessary at all. I have personally experimented with sparse channel connectivity in planar CNNs, and found that it does not seem to matter much how exactly the channels are connected, with the main factor determining compute/accuracy being the number of connections. Full degree mixing does seem desirable, but this implies the need of a MST only if one wishes to use the same connectivity structure in multiple layers. An interesting baseline would be to do the degree mixing using a random pattern in each layer, with various sparsity levels. It may turn out that only the sparsity level but not the precise connectivity structure matters in practice. Such a finding would not diminish the paper's significance.\\n\\nIt should be clarified in the paper that full mixing happens only across several layers (as many as the maximum path length / tree width in the MST). The question then arises whether full mixing actually happens in the considered architecture, given that it is not very deep.\\n\\nIt would be interesting to see actual implementation details in some DL framework, as well as wallclock timings. 
Also, code would be much appreciated.\\n\\nThe appendix describes a method for enforcing spatial localization of the spectral filters, but it is not clear from the paper if/how this is actually used in the network architecture that is tested.\\n\\nIt would be nice to know why the initial convolution layers are necessary, instead of just using the generalized layers introduced in this paper in their full glory.\\n\\nI may have missed it, but could not figure out what L_G^(psi) refers to in 2.6.\", \"post_rebuttal_update\": \"I have read the author response and updated paper, as well as the other reviews, and have decided to maintain my rating.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
XPZIaotutsD | DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION | [
"Pengcheng He",
"Xiaodong Liu",
"Jianfeng Gao",
"Weizhu Chen"
] | Recent progress in pre-trained neural language models has significantly improved the performance of many natural language processing (NLP) tasks. In this paper we propose a new model architecture DeBERTa (Decoding-enhanced BERT with disentangled attention) that improves the BERT and RoBERTa models using two novel techniques. The first is the disentangled attention mechanism, where each word is represented using two vectors that encode its content and position, respectively, and the attention weights among words are computed using disentangled matrices on their contents and relative positions, respectively. Second, an enhanced mask decoder is used to incorporate absolute positions in the decoding layer to predict the masked tokens in model pre-training. In addition, a new virtual adversarial training method is used for fine-tuning to improve models’
generalization. We show that these techniques significantly improve the efficiency of model pre-training and the performance of both natural language understanding (NLU) and natural language generation (NLG) downstream tasks. Compared to RoBERTa-Large, a DeBERTa model trained on half of the training data performs consistently better on a wide range of NLP tasks, achieving improvements on MNLI by +0.9% (90.2% vs. 91.1%), on SQuAD v2.0 by +2.3% (88.4% vs. 90.7%) and RACE by +3.6% (83.2% vs. 86.8%). Notably, we scale up DeBERTa by training a larger version that consists of 48 Transformer layers with 1.5 billion parameters. The significant performance boost makes the single DeBERTa model surpass the human performance on the SuperGLUE benchmark (Wang et al., 2019a) for the first time in terms of macro-average score (89.9 versus 89.8), and the ensemble DeBERTa model sits atop the SuperGLUE leaderboard as of January 6, 2021, outperforming the human baseline by a decent margin (90.3 versus
89.8). The pre-trained DeBERTa models and the source code were released at: https://github.com/microsoft/DeBERTa.
| [
"Transformer",
"Attention",
"Natural Language Processing",
"Language Model Pre-training",
"Position Encoding"
] | Accept (Poster) | https://openreview.net/pdf?id=XPZIaotutsD | https://openreview.net/forum?id=XPZIaotutsD | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"B3gxsldkgCG",
"v2SPm7iJIoK",
"x_SYovaK2DM",
"i5sVBc3PWrr",
"TgVEle3IJA_",
"a5oUDreFvx",
"yIugoHlwxmk",
"hbRWP5lM16H",
"WYwTmQDzGb3",
"tM7jxs-mDwp",
"tAB4kErmoV"
],
"note_type": [
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1612564757944,
1610980841619,
1610040512454,
1606203689165,
1606203594479,
1606203447856,
1606202878151,
1604033484858,
1603879332972,
1603875540644,
1603841489129
],
"note_signatures": [
[
"~Pengcheng_He2"
],
[
"~Jonathan_Pilault1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3690/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3690/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3690/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3690/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3690/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3690/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3690/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3690/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"RE: SiFT (virtual adversarial training) mentioned in your arXiv version but not in the ICLR article: Was it used?\", \"comment\": \"Thanks for your interest in our work. SiFT is only used with 1.5B model in SuperGLUE tasks in the paper. We will clarify this in our updated version.\"}",
"{\"title\": \"SiFT (virtual adversarial training) mentioned in your arXiv version but not in the ICLR article: Was it used?\", \"comment\": \"Very interesting work.\\n\\nIn your arXiv version [1], SiFT (perturbations to the normalized word embeddings) was used. You wrote that \\\"we find that the normalization substantially improves the performance of the fine-tuned models\\\". Since the results of your arXiv version and the ICLR openreview version are the same for $DeBERTa_{large}$, I was wondering if you had used SiFT here. If so, could you discuss the performance differences between $DeBERTa_{large}$ with and without SiFT.\\n\\nThank you!\\n\\n[1] https://arxiv.org/pdf/2006.03654.pdf\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"All reviewers gave, though not very strong, positive scores for this work. Although the technical contribution of the paper is somewhat incremental, the reviewers agree that it solidly addresses the known important issues in BERT, and the experiments are extensive enough to demonstrate the empirical effectiveness of the method. The main concerns raised by the reviewers are regarding the novelty and the discussion with respect to related work as well as some unclear writings in the detail, but I think the pros outweigh the cons and thus would like to recommend acceptance of the paper.\\n\\nWe do encourage authors to properly take in the reviewers' comments to further polish the paper in the final version.\"}",
"{\"title\": \"Author Response to Reviewer 4\", \"comment\": \"We would like to thank reviewer 4 for the detailed comments. Below we try to respond to the feedbacks mentioned in the review:\\n\\n**About additional parameters**. We will clarify this in the main paper. The additional parameters in our original model are due to the projection matrix of relative position embedding. We also perform a new model design via sharing the parameters of the two projection matrices, which makes the number of model parameters close to BERT or RoBERTa, without sacrificing the accuracy. We report the experimental results in Table 11 in the Appendix.\\n\\n**About the experiment result description**. These are great feedbacks and we will describe the comparison more precisely in the new version, especially in the comparison with ALBERT. First, we will add the ALBERT result into Table.2. Table 1 will focus on comparing models similar to BERT-large structure, i.e., 24 layers with 1024 dimensions and compare more SOTA models in Table.2, including ALBERT-XXLarge. Second, we will clarify the excellent design of the parameter sharing introduced by ALBERT, which can significantly reduce the model size although the computation cost is still determined by the model structure, i.e. number of hidden dimensions and transformer layers. As is reported in the ALBERT paper, the data-throughput of BERT-Large is about 3.17x higher compared to ALBERT-XXLarge. We agree ALBERT-XXLargeand DeBERTa-large are comparable in terms of accuracy, with ALBERT-XXLarge having less model parameters and DeBERTa being trained more efficiently as shown in Figure 1. \\n\\nMeanwhile, we will make the change to 4B training samples and fix the references with their latest updates.\"}",
"{\"title\": \"Author Response to Reviewer 3\", \"comment\": \"We appreciate the review and thank reviewer 3 for the thoughtful feedback.\\n \\n**About the incremental of previous methods**. We agree that our approach is an extension to previous methods. Besides a more comprehensive way to capture both the content and position, the main contribution of this paper is a detailed empirical study to demonstrate that the two proposed techniques (Disentangled attention and EMD) are simple and effective.\\n\\n**About statistical significance of the improvements**. We perform a t-test on the MNLI and SQuAD V1.1 between DeBERTa and RoBERTa on their base models. The p-value on both datasets is less than 0.05. More details are provided in one of the responses to the reviewer 2 above. \\n\\n**About the perplexity**. We follow previous work such as XLNet to report the perplexity. But we will add a new generation task of next-word-prediction in the new version, as a complement to perplexity in the generation tasks. \\n\\nMeanwhile, we will incorporate other great feedbacks and fix them in our next version, including the notation in section 4.1.1, acronyms, and some redundancy in text.\"}",
"{\"title\": \"Author Response to Reviewer 2\", \"comment\": \"Thank you for the positive review. We provide the answer to the questions and potential concerns.\\n\\n**Q1**: The disentangled attention in DeBERTa is motivated but not closely related to disentangled representations or features. Unlike the conventional absolute position bias encoding which adds the position embedding into content embedding directly, we borrow the idea of disentangled representations to decompose the attention score into different parts to avoid the interference between content and position, as well as fully capture the interaction between content and relative positions, and add the absolute position embedding back into the EMD layer in DeBERTa. We will add a footnote in new version to clarify this. \\n\\n**Q2**: About the difference between EMD and masked language model (MLM), we add absolute position encoding at the last layer in EMD to address the limitation of relative position encoding on MLM, which is demonstrated by the example in section 3.2 with an ablation study in Table.5.\\n\\n**Q3**: Following previous works such as BERT and RoBERTa, in our experiments, we simply use the conventional way to initialize our model (parameter matrices) using normal distribution N(0, 0.02). How the initialization affects the model performance is an open research topic beyond the scope of this paper. We agree that it could be an important research topic for the future work. \\n\\n**About the variance with multiple runs**. Following BERT and RoBERTa, our reported numbers are based the average on 5 runs with different random initialization seed. Here are the results with min, max, average, and a t-test on MNLI, SQUAD for base model, as a complement to the Table.5 in the paper. \\n\\n---------------------------------------------------------------------------------------------------------------------------------------\\n\\n|\\t |DeBERTa base(Min/max/avg)\\t| RoBERTa-ReImp base(Min/max/avg) |p-value of t-tests |\\n|:---------------------------------------------:|:-------------------------------------------------:|:-------------------------:|:------------------------------:|\\n| MNLI-matched(Acc)\\t |86.1/86.5/86.3 |\\t84.7/85.0/84.9\\t| 0.02 |\\n| SQUADv1.1(F1)\\t | 91.8/92.2/92.1\\t| 90.8/91.3/91.1|\\t0.01 |\\n---------------------------------------------------------------------------------------------------------------------------------------\\n\\nIn our paper all the improvements that we claimed statistically significant are based on statistically significant tests with p-value < 0.05.\"}",
"{\"title\": \"Author Response to Reviewer 1\", \"comment\": \"We would like to thank reviewer 1 for the thoughtful comments and suggestions. Below we address the concerns mentioned in the review:\\n\\n**Difference with Transformer-XL**. We will clarify the commonness and difference in the next version. The relative position in DeBERTa is an extension of that in Transformer-XL and XLNet, with a different motivation and implementation. First, in Transformer-XL/XLNet, the relative position is introduced to solve the position dependencies between tokens among different segments. In DeBERTa, the motivation is to decompose position information from content information thoroughly. Second, DeBERTa separates the content and position in a more comprehensive way. For example, DeBERTa contains a new position-to-content component that captures the relative interaction between position and content at the attention layer. This is an important introduction in DeBERTa. As we showed in Table.5, this new component is critical in the new disentangled attention and can substantially boost the model performance in the Ablation study. Meanwhile, we did compare with XLNet in Table.5 and briefly mentioned the DeBERTa minus P2C will be reduced to XLNet plus EMD. We will make this clearer in the revision. \\n\\n**The word Disentangled**. Thanks for the suggestion. We will add a footnote in the paper to distinguish these two concepts. Our disentangled attention is motivated but not closely related to disentangled representations or features. Unlike the conventional absolute position bias encoding which adds the position embedding into content embedding directly, we borrow the idea of disentangled representations to decompose the attention score into different parts to avoid the interference between content and position, while fully capturing the interaction between content and relative positions. We will make this clear in our new version.\"}",
"{\"title\": \"Good empirical performance but requires a more careful comparisons to prior works.\", \"review\": \"The paper proposed a novel attention mechanism and a new objective function that mitigates the distribution shifts caused by masked tokens for downstream tasks in MLM. It demonstrates superior performance across benchmarks.\", \"pros\": \"1. Good empirical results are demonstrated across an extensive suite of benchmarks. Ablation studies are well done. Hence I am willing to give a score of 6 despite of the following concerns.\", \"cons\": \"1. My major concern is about the novelty of this paper. \\nIn transformer-XL[1], the idea of relative positional information in the form of Eq (2) was already introduced. The paper somehow intentionally omit the discussion following (2), only mentioning two earlier works of (Shaw et al., 2018; Huang et al., 2018). I think the author should be honest and compare with relative positional information introduced in transformer-XL in the forefront. \\nThat being said, there is obviously still differences between transformer-XL and the proposed methods. And also the introduction of novel objectives in addition to the attention mechanism. \\n\\n2. However, the previous concern brought up the second concern I have about the evaluations. Since the modification relative positional information of transformer-XL to the proposed method is not too large, I wonder if there is a reason to explain the better performances of the proposed methods. Hence I am worried if the baseline such as XLNet was well-tuned. We can see that for example in [2], the performance of XLNet was much better than originally reported. I think the author should try to carefully evaluate the relative positional mechanisms of prior works with authors' own infrastructure, while having everything else fixed.\\n\\n3. I find the word \\\"disentangled\\\" a bit misleading in this context. Disentanglement in ML [3] often refers to the ability to disentangle factors of variations of the data. The work does not make use of any disentangled techniques, or have disentanglement representation/architectures. It simply use a relative position mechanism that's the sum of four matrix products. \\n\\n[1] Dai et. al. Transformer-XL: Attentive Language Models Beyond a Fixed-Length Context\\n\\n[2] Dai et. al. Funnel-Transformer: Filtering out Sequential Redundancy for Efficient Language Processing\\n\\n[3] Locatello et. al., Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"DeBERTa: Decoding-Enhanced BERT with Disentangle Attention\", \"review\": [\"Summary and Contributions\", \"The authors proposed an extension to the word representation transformer architecture that takes into account disentangle features for position and content. The disentangle of attention is based on the composition of a content and position parameter matrices, in addition with combinations of both. The main contribution is to tackle issues with the relative position embeddings used on standard transformer architectures. The proposed model shows improvements on some benchmarks by using less pre-training data compared to the baseline.\", \"Strengths\", \"The proposed model tackles a known issue in transformer architectures.\", \"The authors perform a comprehensive comparison on standard text benchmarks as well as an ablation study.\", \"The findings show that disentangle attention improves results on some text benchmarks.\", \"Weaknesses\", \"Related work on disentangle representations for text, and the further motivation for using disentanglement into the attention model are not discussed.\", \"Missing results of the variance in metrics with multiple runs on the downstream tasks. As an extra contribution, the authors could show if the improvements are due to the proposed model or variance in parameter initialisation.\", \"Questions to the Authors\", \"Could you elaborate on disentangled representations and how they relate to the proposed attention model?\", \"How does it compare the enhanced masked language model with the masked language model?\", \"How does the relative position parameter matrix is initialised, and how does it affect the language model performance?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"In this paper, an improvement of BERT model is proposed. It relies on the disentanglement of contents and relative positions in the encoding layers and on the incorporation of absolute positions in the decoding layer.\", \"review\": \"In this paper, an improvement of BERT model is proposed. It relies on the disentanglement of contents and relative positions in the encoding layers and on the incorporation of absolute positions in the decoding layer.\", \"strengths\": [\"The paper is well written, the positioning to the state of the art is clear and the method is rigorously described.\", \"The paper provides a complete evaluation using the existing benchmarks for NLP and including ablation studies and evaluation of pre-training efficiency and Deberta improves results in the major part of the cases.\"], \"weaknesses\": [\"The proposed method is a relative increment of previous methods.\", \"In Section 4.1.1., the way performance increase or decrease is reported is not exact (1.1% -> 1.1 points)\", \"Do we have an idea of the statistical significance of the improvements?\", \"It would be interesting to have the rationale for the mitigated result obtained on Table 1. Is Deberta more relevant for specific tasks?\", \"The authors claim that they evaluate their results on generation task but it rather seems that they evaluate language modeling using perplexity.\", \"*The use of non documented acronyms (ppl, for example) that could be not understandable outside the NLP community.\", \"*They are some redundancy in the text (second paragraph of 3.2 and fourth paragraph of the introduction) that is not necessary.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION\", \"review\": \"The paper proposes a BERT-inspired model that adds a two main different architectural decisions: different content and position representations (instead of a sum), and absolute positions in the decoding layer. The authors run the standard suite of GLUE benchmark experiments, on both \\u201clarge\\u201d and \\u201cbase\\u201d setups, as well as a generation setup (Wikitext-103).\\n\\nThe modifications proposed are not game-changing, but the evaluations are interesting in terms of understanding the impact of these modifications. One thing that I find disingenuous is fact that their disentangled approach does introduce additional parameters, which is not quantified (or even mentioned) in the main paper. I had to dig into the Appendix to see that this introduces about 49M additional parameters (increment of 13%).\\n\\nAnother problem that I have is with their experimental comparisons, especially the ones in main part, Sec 4.1.1. I\\u2019m listing below the most important issues in this section:\\n\\n\\u201cRoBERTa and XLNet are trained for 500K steps with 8K samples in a step, which amounts to four billion passes over training samples\\u201d. This is confusing; what you mean to say is that the models see about four billion training examples. The term \\u201cpasses\\u201d is used usually as an equivalent to \\u201cepochs\\u201d, ie how many times the model goes over the entire training set.\\n\\n\\u201c[...] Table 1, which compares DeBERTa with previous models with around 350M parameters: BERT, RoBERTa, XLNet, ALBERT and ELECTRA.\\u201d Note that ALBERT is actually around 235M parameters, significantly less than all the others. You cannot simply bundle all together and claim they are equivalent parameter-size--wise.\\n\\n\\u201cDeBERTa still outperforms them [ALBERT_xxlarge] in term of the average \\u201cGLUE\\u201d score.\\u201d Note that the difference here wrt ALBERT_xxlarge is from 89.96 to 90.00, ie 0.04 for the average, with a tie 3-3 in terms of wins for specific tasks. Unless you can show that the 0.04 difference is statistically significant, you need to tone down the claim about \\u201coutperforming\\u201d.\\n\\n\\n\\u201cWe summarize the results in Table 2. Compared to the previous SOTA models with similar sizes, including BERT, RoBERTa, XLNet and Megatron336M, DeBERTa consistently outperforms them in all the 7 tasks. Taking RACE as an example, DeBERTa is significantly better than previous SOTA XLNet with an improvement of 1.4% (86.8% vs. 85.4%).\\u201d\\nFor whatever reason, the authors omit ALBERT from the comparison done for Table 2, in spite of its even smaller size compared to the included ones, and the fact that the ALBERT numbers for these tasks are readily available in the paper. Taking RACE as an example: ALBERT (single model) has 86.5% accuracy, therefore nullifying the claim of 1.4% improvement.\", \"re\": \"References\\n\\nA lot of the references use the Arxiv version for papers that have been peer-reviewed and published. Please fix.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
-6vS_4Kfz0 | Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning | [
"Shauharda Khadka",
"Estelle Aflalo",
"Mattias Marder",
"Avrech Ben-David",
"Santiago Miret",
"Shie Mannor",
"Tamir Hazan",
"Hanlin Tang",
"Somdeb Majumdar"
] | For deep neural network accelerators, memory movement is energetically expensive and can bound computation. Therefore, optimal mapping of tensors to memory hierarchies is critical to performance. The growing complexity of neural networks calls for automated memory mapping instead of manual heuristic approaches; yet the search space of neural network computational graphs has previously been prohibitively large. We introduce Evolutionary Graph Reinforcement Learning (EGRL), a method designed for large search spaces, that combines graph neural networks, reinforcement learning, and evolutionary search. A set of fast, stateless policies guide the evolutionary search to improve its sample-efficiency. We train and validate our approach directly on the Intel NNP-I chip for inference. EGRL outperforms policy-gradient, evolutionary search and dynamic programming baselines on BERT, ResNet-101 and ResNet-50. We additionally achieve 28-78% speed-up compared to the native NNP-I compiler on all three workloads. | [
"Reinforcement Learning",
"Memory Mapping",
"Device Placement",
"Evolutionary Algorithms"
] | Accept (Poster) | https://openreview.net/pdf?id=-6vS_4Kfz0 | https://openreview.net/forum?id=-6vS_4Kfz0 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"KI1els_w84",
"xrGMPNaZ3iE",
"0dMM1sAcune",
"PQRjqcyQ2h",
"ckZy3Nu4wDw",
"AH4texsP3xW",
"ftoNpMG-2pw",
"7EpA3WMOSNi",
"g_FuxBPM3U",
"Pv8809vdKdU",
"OB0PdkdJdp",
"sRbFFceOyWg",
"OG0PshTN1wX",
"XlRlf2Pb_M",
"dCCxpALWJ4",
"CvfwPhx8Jky",
"g07Ws9sL6kr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040497703,
1605760207589,
1605757501285,
1605648282841,
1605589828362,
1605579871096,
1605577414909,
1605410969993,
1605297857618,
1605297802653,
1605297740843,
1605297639938,
1605297530970,
1603944333234,
1603907427382,
1603783926496,
1603718871735
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3688/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Most of the reviewers agree that this paper presents interesting ideas for an important problem. The paper could be further improved by having a thorough discussion of related works (e.g. Placeto) and construct proxy baselines that reflect these approaches.\\n\\n\\nThe meta-reviewer decided to accept the paper given the positive aspects, and encourages the author to further improve the paper per review comments.\\n\\n\\nThank you for submitting the paper to ICLR.\"}",
"{\"title\": \"Comments\", \"comment\": \"Overall, I feel the revision seems fine to me. As a last comment, I would like to see the reference to the AutoTVM, Chameleon that addresses the code optimization side of the work as I mentioned in the original review. Overall, I am satisfied with the authors' response, hence increased my score from 6 to 7.\"}",
"{\"title\": \"Thanks | Agreed about more data points\", \"comment\": \"We agree more data points would make EGRL's claims even more concrete. Since we were limited by hardware constraints, we had to take a call on when we had sufficient data for a proof of concept and felt that the three representative large models were a reasonable data point.\\n\\nWe appreciate your revisiting the score!\"}",
"{\"title\": \"Comments\", \"comment\": \"I thank the authors for responding to my questions.\\n\\n*Contrast to REGAL that evaluates 372 workloads and Placeto that evaluates 96 synthetic graphs*: I acknowledge that validating results on real hardware is a big win for EGRL over [1] and [2], however, my comment was a critique on the amount of evidence provided that EGRL works on real hardware. As it stands, we are just two data points away from a sample size of one :) I understand the computational limitations of these kind of experiments, however, my judgment tells me that an excellent version of this paper would have evaluated the technique on many more deep learning models (GANs, RNNs, Graph Neural Networks, maybe even random neural networks). I stick to my stance that the performance of EGRL looks promising but should be taken with a grain of salt until more evidence is provided.\\n\\n*If it takes long to train and has poor generalization - why is it practical?*: Thanks for the clarification - if I understood correctly, the practicality is similar to that of [3] and [4].\\n\\n*Action space in REGAL [1] compared to EGRL* - Good point, while [1] does have some graphs with larger action spaces, they only provide the average performance of REGAL and it is hard to conclude if that translates to good performance on large graphs.\\n\\n*Baselines might be weak, Training time, Generalization properties, Related works, Visualization for BERT mappings* - Thank you for the clarification\\n\\nOverall, I am happy to bump my score to a 6.\\n\\n[1] Aditya Paliwal, Felix Gimeno, Vinod Nair, Yujia Li, Miles Lubin, Pushmeet Kohli, and Oriol Vinyals. Reinforced genetic algorithm learning for optimizing computation graphs. arXiv preprint arXiv:1905.02494, 2020\\n\\n[2] Ravichandra Addanki, Shaileshh Bojja Venkatakrishnan, Shreyan Gupta, Hongzi Mao, and Moham-mad Alizadeh. Placeto: Efficient progressive device placement optimization. In NIPS MachineLearning for Systems Workshop, 2018\\n\\n[3] Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, and Jeff Dean. A hierarchical model for device placement. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkc-TeZ0W.\\n\\n[4] Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, NaveenKumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. Device placement optimization with reinforcement learning. arXiv preprint arXiv:1706.04972, 2017.\"}",
"{\"title\": \"Manuscript updated with discussion on data flows and relation to other ML-based code optimizations\", \"comment\": \"We thank you again for the engaging discussion.\\n\\nBased on your feedback, the updated manuscript now contains an extended discussion in Appendix G on the relevant architectural details of NNP-I and details on the memory hierarchy and trade-offs. We also added to it a subsection titled `EGRL's Interaction with NNP-I` where we go into detail about the exact level of control EGRL has on the data flow in the hardware. \\n\\nIn order to retain the original section numbering and 8-page limit while it is under review, we added this complete section to the Appendix. Since the final manuscript allows for 9 pages, we will move this section to the main paper in the camera-ready version.\\n\\nSpecifically, we clarify that while EGRL can be extended to control various hierarchy of the data flow, we choose to study the least restrictive formulation (i.e., requiring the least amount of domain knowledge) where downstream cache management and other optimizations are left to the compiler. \\n\\nWe added an additional paragraph on the complementarity of EGRL with other ML based graph optimization methods such as [5] but also other forms such as quantization and model compression which are generally not hardware-aware. \\n\\nWe hope these changes address the points you raised in this discussion.\"}",
"{\"title\": \"Comments\", \"comment\": \"Thank you for the detailed and thoughtful response.\\nIn general, I am satisfied with the authors' response, and I would be happy to vote strongly for acceptance **on the premise that the authors revise the manuscript so that it includes description of the work in the context of the dataflows and the other ML-based code optimizations.** I believe this work can be fully appreciated only when connections to other aforementioned aspects of compilation and optimization are provided together.\\n\\nThe work provides interesting insights and a good practical application of GNNs and its marriage with evolutionary algorithms. While there are some weaknesses in evaluation as the Reviewer 1 and 3 pointed out. However, I believe the insights and the gains provided in this paper outweighs such weaknesses.\"}",
"{\"title\": \"NNP-I architecture | EGRL's partial view of the problem | Discussion on graph re-writing [5]\", \"comment\": \"Thank you for your thoughtful response and for taking the time to clarify the definition of data flows.\\nThe updated manuscript now has an additional section on the architecture of NNP-I. Deeper details can also be found in [https://en.wikichip.org/wiki/intel/microarchitectures/spring_hill]. \\n\\nBroadly, NNP-I consists of 12 ICE (inference compute engine) cores each with a fast 4MB SRAM. All ICEs have access to a shared, slower LLC (24MB). Additionally, an even slower 32GB DRAM can also be accessed by an ICE core via the LLC. Thus, our agent needs to learn to trade off between memory capacity and bandwidth across these options. \\n\\nIn EGRL, the GNN **only** encodes the input workload explicitly. In this work, our RL agent, and other baselines, only specify if a layer should be placed in one of the three memory types. The choice of the specific ICE core (and therefore the specific Deep SRAM address) is left to further cache management logic in the compiler. Since EGRL trains itself on the resultant latency, one could argue that it learns to model the consequence of the additional compiler optimizations implicitly - but this is never modeled explicitly. \\n\\nThus, EGRL **does have a partial view of the problem** in the context of data flows as you describe it. \\n\\nHaving said that, it would be possible to apply EGRL on an action space larger than what we chose: E.g. To let it control the number of compute kernels (ICE) to use per thread or per op, scheduling between branches, and more. We believe that the greater the complexity of the chosen action space is, the greater the potential of optimization but also the convergence time of the algorithm and need for specific knowledge of the underlying architecture. These options are recorded for future work.\\n\\nWe cannot say conclusively whether EGRL would provide similar speedup regardless of the sequence of operations executed by the compiler. A more rigorous way to answer that question is to test EGRL on a wide range of hardware/compilers for a given computational graph. That is certainly something worth investigating in future work. \\n\\nThe question about the optimality of the baseline is fair. The work in [5] specifically aims to reduce wasteful memory communication resulting from \\u201cirregularly wired neural networks\\u201d that stem from \\u201crandom network generators\\u201d and \\u201cneural architecture search\\u201d. In our work, we considered three very commonly adopted networks (Resnet-50, Resnet-101 and BERT) - all of which could be generally considered optimally wired compared to a randomly discovered topology. Thus, we do not think that an approach like [5] would find significant memory bandwidth savings compared to standard compiler optimizations like layer fusion. \\n\\nYou raise another interesting question - if EGRL would provide the same gains on top of the optimizations stemming from [5]. The work in [5] performs hardware independent optimization of the network topology in a way that minimizes \\u201cwasteful\\u201d sequences -- essentially a form of graph rewriting. On the other hand, EGRL aims to optimize on a hardware specific performance metric while executing a computational graph on the target - but has no ability to modify the topology of a computational graph. Thus, the two are complementary approaches to accelerate the execution of such graphs. 
\\n\\nConsider the scenario where a computational graph optimized by [5] is to be executed on a target hardware. Depending on the application, the desired hardware performance metric could be very different - e.g., pure throughput (as we use in the paper) or throughput per watt. Additionally, depending on the hardware configuration, various compute and memory components may have very different trade-offs. Hardware specific constraints like these are not handled by [5] at all. EGRL will (in theory) learn to optimize the application specific performance metric under the hardware-specific constraints. An interesting future work could be hardware aware graph-writing - which could investigate if the graph re-writing could be jointly optimized with memory mapping for a given hardware target for neural architecture search.\"}",
"{\"title\": \"Questions\", \"comment\": \"I understand that a formal study across several hardware configurations is not feasible during the rebuttal period. However, I would like to see some reference to the details of the architecture for more informed evaluation of the paper.\\n\\nRegarding the answer about the dataflows, I think there may be a little mismatch between the definition of dataflows. The dataflows that I refered to is not limited to just the sequence of the operations. The dataflows I am referring to includes the mapping of the operations to the hardware which one incarnation would be tiling. I believe NNP-I would not have a computing array that computes the entire layer in a single step, but relies on some tiling which is done by the compiler. Without encoding these details to the GNNs, it seems to me that the current work only takes a partial view.\\n\\nThis dataflow issue also has to do with [3,4]. In my original review, I was looking for authors' view in relation to how the results presented in the paper would be affected by different dataflows, sequencing, etc. [3,4] shows that there is a significant variance in the inference speed of deep networks. Therefore, the overall gains from the proposed method can vary significantly depending on how optimized the paper's baseline is. I would like to see some analysis and evaluation with regard to this.\\n\\nFrom the authors' response, it seems that the authors are claiming that this method encodes the relative sequence through the GNN. Does this mean that the work would provide similar speedup regardless of the sequence of operations? For example, in [5] it is shown that the overall memory footprint is significantly affected by the sequence of the operations, and the paper presents an optimal sequence of operations. My question is whether this method would provide the same significant gains shown in the paper even after the \\\"wasteful\\\" memory communication from the suboptimal sequence of operations have been obviated by [5].\\n\\n[5] \\\"Ordering chaos: Memory-aware scheduling of irregularly wired neural networks for edge devices\\\", MLSys 2020\"}",
"{\"title\": \"Response 1 of 1\", \"comment\": \"Thank you for your review. We would like to address your main concerns below.\", \"nnp_i_architecture_and_configurations\": \"As suggested, we will add a section in the Appendix detailing the NNP-I architecture. While the NNP-I does allow for different configurations to change power consumption via frequency control, the results reported in the paper are for one configuration.The results in our paper, however, come from running experiments on different instances of the hardware. For those cases, while the absolute measured throughput varied, as expected, from chip-to-chip, the observed relative improvements over the baselines had little variability. \\n\\nWe defer a formal study across several hardware configurations to future work. \\n\\nRelative communication speeds of SRAM, LLC and DRAM:\\nThe memory size and bandwidth trade-offs between SRAM, LLC and DRAM are provided in section 3; LLC (24MB) and SRAM (4MB) are ~10x and 100x faster than DRAM (32GB) respectively.\", \"effect_of_dataflows_on_memory_communication\": \"We address this point while justifying our choice of a graph representation for the incoming workload. The edges in the graph input to the GNN agent captures the relative sequences of operations as each node in the input graph represents an operational layer. The advantage of the GNN representation is that this raw graph representation is then mapped to lower dimensional GNN embeddings - specifically the hidden layers of the Graph UNet. These GNN features, in theory, should be invariant representations of the data-flow - including the order of operations. \\n\\nIn contrast a sequential mapping strategy necessarily has to prioritize some nodes over others (typically by mapping the first node and working towards the end of the incoming workload, though this is absolutely not necessary). In practice though we must map all nodes in order to even achieve a valid mapping and there is no notion of order in the final complete mapping. EGRL simply maps all nodes simultaneously (in one step).\\n\\nWe do not quantify this directly - however, we note that such invariant feature representations are commonly noted in hidden layers of deep neural networks in general. Thus, we feel that the GNN approach actually does capture the full view of the problem - as opposed to sequential mapping strategies which have a partial view of the problem for a given iteration. While our overall speedup metric provides an aggregate view of memory communication for the entire workload, a rigorous study investigating this at the node level is an interesting angle for future work.\\n\\nComparison to [3] and [4]:\\nWe will add these references to the manuscript. \\nIn AutoTVM [3] and Chameleon [4], the authors investigate the automatic optimization of tensor operations for arbitrary hardware targets. AutoTVM builds a statistical model to estimate the cost of each low-level program. The specific implementations deploy gradient-boosted trees and TreeGRU - the latter being a deep-learning based method. Chameleon takes an RL based approach. Although these cannot be directly mapped to a \\u201cdevice mapping\\u201d strategy, they can be considered similar in nature to the problem of optimizing in a large combinatorial space - similar to our paper - and either method could potentially be applied to the device mapping problem with appropriate adjustments for the state space. \\n\\nChameleon is obviously closer to our work - and uses a purely actor-critic network. 
However, here too, the \\u201cleverage a clustering algorithm to find configurations that are representative of each cluster\\u201d. This is consistent with other pure RL based prior-work that rely on clustering algorithms. We could not glean from the paper the magnitude of the combinatorics problems they solve - however, they report their results on AlexNet, VGG-16 and Resnet-18 which are all significantly smaller workloads compared to the ones in our work or even other related works.\"}",
"{\"title\": \"Response 1 of 1\", \"comment\": \"Thank you for your review. We would like to address your main concerns below.\\n\\n**Need for the use of a hybrid approach:**\\nThe use of RL and EA is motivated by several works in recent literature that have demonstrated the effectiveness of combining reinforcement learning with search for extremely large combinatorial problems. \\n\\nFor example, AlphaGo and AlphaZero successfully combined RL and look-ahead search (MCTS) to find effective strategies on board games. CERL showed similar gains in large action spaces for continuous control by combining RL and EA search. We adopted the EA search due to ease of implementation - but the broader design could also have utilized tree search as well as other search methods.\", \"sparse_rewards_are_a_common_problem_in_rl_and_our__case_is_no_different\": \"A sparse reward obtained once for all nodes renders it difficult for RL to learn in isolation. This is alleviated by EA, as it relies only on episodic performance. On the other hand, EA converges extremely slowly compared to RL. EGRL provides a framework for the slower EA component to anchor its search around partial solutions provided by the RL component. This speeds up EA while retaining its ability to find stable and high performance solutions. This also avoids the problem of reward shaping, which is commonly used in sparse reward problems. We integrate the sparse episodic feedback natively into the reward.\\n\\nFurther, the population based approach in EA allows for significant asynchronous parallelizability since all policies can roll-out in the environment completely independently of each other. Since the rollouts dominate the compute budget, convergence time for EGRL decreases with a larger number of hardware instances available. \\n\\nThe importance of the RL component to improve on top of EA is shown in the ablation studies in Fig 3 where we see that the combined EGRL formulation consistently outperforms the RL and EA components in isolation. This quantifies our assertion that both EA and PG are essential components of EGRL.\\n\\n**Sample complexity:**\\nThe number of iterations reported is the same as the sample complexity in this paper. We have updated the text in the revised manuscript to make this clearer.\"}",
"{\"title\": \"Response 1 of 1\", \"comment\": \"Thank you for your review. We would like to address your main concerns below.\\n\\n**Sizes of the action space in prior work:**\\nThe reference in our paper should have been to Mirhosseini et al 2017 (Device Placement) and not Mirhosseini et al 2020 (Chip Placement). We have corrected this error in the updated manuscript. \\n\\nAs discussed in our paper, a number of prior works deploy heuristic pre-processing steps to reduce the action space for the learning problem. Since it is difficult to compare heuristics across different applications, we focus primarily on the complexity of the learning problem. We feel this is a fair approach since EGRL does not rely on any algorithmic heuristics and requires only basic domain knowledge e.g., the state and action space dimensions to initialize the policy networks and the task objective to optimize. \\n\\nIn the 2017 paper (our intended reference), Table 1 shows the workloads considered. The largest number of heuristically derived groups is 280 corresponding to NMT. The maximum number of devices considered for each group of operations was 5 \\u2192 thus leading to the maximum combinatorics space of 5^280 or \\u301c 10^196 as discussed in our paper.\\n\\nIn the 2020 paper (the incorrect reference), as you note, the overall combinatorics is extremely large. However, the problem for the reinforcement learning agent in that work is significantly smaller than that. For example, their paper notes that the action space is all valid placement of one macro. Following the RL-based macro placements, the standard cells are placed using classical force-directed methods rather than any learnt policy. \\n\\nThus if we focus on the RL problem, the action space is all possible placement choices for one macro - which in their case are the centers of \\u201ca few thousand grid cells\\u201d. Since they do not take a joint action across all macros - placing macros one at a time (Figure 1a) - their actual action space is of the order of \\u201ca few thousand\\u201d. They also do not report the total number of macros placed - thus making it difficult to ascertain the overall scale of the combinatorics problem. \\n\\nIn Section 2, we provide some trade-offs between sequential placements and simultaneous placements. We will update it to discuss the action space comparison with this paper in the final manuscript.\\n\\n**Comparisons with HDP, Regal, Placeto etc.:**\\nWe appreciate that you recognize that direct comparisons with HDP, REGAL, Placeto etc. are difficult due to the lack of official open-source code. Additionally, each of these works rely on domain specific heuristic pre-processing steps - which makes it difficult to directly compare on different problem domains. We therefore chose to compare against the state-of-the-art RL baseline (SAC), since policy-gradient based RL is generally a common choice for the learning portion of these works. \\n\\n**Nits**: Thank you for pointing these out. We have fixed these typos in the updated manuscript.\\n\\nWe hope the above clears up the confusion around action spaces in prior work. Since you score our paper marginally below the acceptance threshold, we would greatly appreciate your feedback if you feel other aspects of the paper require more detail.\"}",
"{\"title\": \"Response 1 of 2\", \"comment\": \"Thank you for your detailed and insightful review. We would like to address your main concerns below.\\n\\n**Contrast to REGAL that evaluates 372 workloads and Placeto that evaluates 96 synthetic graphs:**\\nWe contrast our work with REGAL [1] in the paper - which assumes zero latency and infinite bandwidth that is not practical on actual hardware. Specifically, they mention \\u201cWe consider two tasks, one is minimizing peak memory and the other is minimizing running time, both on two homogeneous devices with 16 GiB of memory each and synchronous tensor transfers with zero cost (zero latency and infinite bandwidth).\\u201d \\n\\nWe also address comparisons with Placeto [2] in our related works section and note the significant amount of heuristic, domain specific grouping utilized as pre-processing steps. In contrast EGRL assumes no domain knowledge and directly handles the entire combinatorial action space. Placeto, too, trains on a simulator and not on physical hardware. Specifically, they mention \\u201cSince it can take a long time to execute placements on real hardware and measure the elapsed time, we built a reliable simulator that can quickly predict the runtime of any given placement for a given device configuration.\\u201d\\n\\nThe key differentiating point of EGRL is that our pipeline is entirely trained and validated on a hardware accelerator end-to-end. With such real-world constraints, training time for each policy is gated by real bandwidths and memory constraints. In fact, the key point of EGRL is to learn the trade-off between the relative memory capacity and bandwidths of the memory components on the chip. Thus our solution requires no further sim2real steps to transfer to actual hardware - whereas both of these approaches would need to be validated on hardware. Neither of these papers address if their solution does indeed transfer to real hardware. \\n\\n**Baselines might be weak:**\\nEGRL uses a population based approach with ranking. So even at the zero-th iteration, there are a large number of candidate solutions of which we report the champion performance. Further, the mapping logic in the compiler comprises standard heuristics like cache eviction strategies that are common across processor architectures. We also compare against more well understood baselines like dynamic programming and pure policy gradient RL. \\n\\nOur choice of baselines was primarily motivated by prior work in this area. For example, Mirhosseini 2017, 2018, 2020, Placeto, REGAL, etc do not provide official source code. Further, each of these prior works have domain-specific, heuristic sub-modules that are not guaranteed to work in a different domain. However, all of these papers have a core RL component - and thus we chose to benchmark using SAC - which is a state of the art RL algorithm - as one of the baselines. We contend that this choice distils the core RL aspect across all of these prior works. \\n\\n**Training time:**\\nOur policies train in order of hours if restricted to a single hardware node. The main cost of each iteration is the rollout step - which is essentially a forward inference pass using the workload. The update cost for the policy is negligible in comparison. When trained on multiple hardware nodes, we can take advantage of the parallelization inherent in EGRL and training time reduces roughly linearly with an increasing number of nodes. 
\\n\\n**Generalization properties:**\\nWe completely agree that the generalization experiments cannot be used to conclude that an EGRL policy can fully transfer from one workload to another. In fact, we concede this in the paper. The scope of our generalization experiments is to investigate if the learnt GNN embedding are performant at all when transferred zero-shot. We found that zero shot transferred policies can be somewhat performant - although they are definitely sub-optimal compared to training from scratch. \\n\\nIn Fig 4, the policy trained on BERT transferred to ResNet-101 and achieved a ~25% speedup. As you note, it did not achieve any speed-up when transferred to ResNet-50. The policy trained on ResNet-50 transferred to ResNet-101 and achieved ~50% speedup. It also transferred to BERT and achieved ~25% speedup. As we acknowledge in the paper, further fine-tuning is required to bridge the performance gap - however, the fine-tuning should consume fewer samples than training from scratch. We defer a full study of the generalizability properties to future work.\"}",
"{\"title\": \"Response 2 of 2\", \"comment\": \"**\\\"If it takes long to train and has poor generalization - why is it practical?\\\":**\\nEGRL is practical because it allows a designer to optimize run-time on hardware with minimal understanding of compiler operations or memory hierarchy.\\n\\nWe respectfully disagree with the notion that it is not practical because \\u201cit takes long to train\\u201d. Prior works with faster training times noted above do not train on hardware directly. As discussed, some of them do not model true hardware constraint - and thus might need additional fine-tuning on hardware for peak performance. This is unknown as those works do not map their solutions to hardware. \\n\\nAlso, as noted, training time for EGRL scales ~linearly with parallel nodes. This could be particularly useful when optimizing on large clusters.\\n\\n**Related works:**\\nAll of these related works are discussed in the paper and we do note that these aim to \\u201coptimize the execution of computation graphs on hardware\\u201d. Based on your suggestion, we have added additional verbiage noting the differences in the problem domains.\\n\\n**Action space in REGAL [1] compared to EGRL:**\\nRegal trains and tests on a dataset of graphs with varying numbers of nodes. In their Fig 5, they show the distribution of the number of nodes in a dataset. Their publicly released \\u201csynthetic dataset\\u201d comprises graphs with at most 200 nodes - thus making the action space 2^200 or ~10^60. This is much smaller than our action space which goes up to 10^358. The equivalent graphs for them would have to be ~1200 nodes. \\n\\nTheir TF memory graphs dataset does go up to 2000 nodes. However, the distribution of the number of nodes is centered around 100 with the 1200+ nodes falling on the tail of the distribution. Thus the majority of their graphs are several orders of magnitude smaller than what we consider. \\n\\nPertinently, they only report mean performance for their evaluations in Fig 2 and Fig 3 on a test set that samples graphs from this distribution. From this, it is difficult to conclude if their reported mean performances would actually scale to 2000 nodes. They show performance variance in the Appendix - but do not isolate the performance for 1200+ nodes - which would be roughly equivalent to our action space. \\n\\nHaving said that, our future work will investigate scaling EGRL to extremely large graphs with ~1000+ nodes. So far, for 350+ nodes, we haven\\u2019t observed any noticeable performance degradation with size.\\n\\n**Visualization for BERT mappings:**\\nWe left the BERT mapping out due to constraints on space. We see similar mapping differences in BERT as for the ResNet mappings. We have added this to the appendix in the revised manuscript - and will update it in the main paper in the camera-ready version\"}",
"{\"title\": \"Well-written paper on memory mapping method that outperforms native NNP-I compiler by 28-78%\", \"review\": \"The paper describes a machine learning method for mapping computational nodes in a neural network onto different levels of the memory hierarchy (DRAM, LLC, and SRAM) to minimize latency of inference. The proposed approach builds on CERL, combining policy gradient, reinforcement learning, and graph neural network, achieving 28-78% speed-up over the native NNP-I compiler on vision and language benchmarks (ResNet-50, ResNet-101, and BERT).\\n\\nOverall, the paper was well-written, targets an impactful problem, and the reported improvements (28-78% over native compiler) are impressive.\\n\\nIn the related work section, I did have a concern, as the authors state \\u201cFor example, previous work with manual grouping (sic) operate at most in 5^280 \\\\~= 10^196 dimensional action space (Mirhoseini et al., 2020), compared to 10~ 20^358 for our BERT problem\\u201d. However, Mirhoseini et al., 2020 (\\u201cChip placement with deep reinforcement learning\\u201d) places \\u201ca few thousand clusters\\u201d (>=2000 nodes) onto a grid with \\u201can average of 30 rows and columns\\u201d (~900 cells), so wouldn\\u2019t the action space be at least 900^2000? Also, didn\\u2019t that work use a heuristic grouper (hMETIS), but maybe that\\u2019s close enough to \\u201cmanual\\u201d? \\n\\nThe authors only look at three benchmarks, but they were well-chosen (two representative vision models and one large language model). It\\u2019s also good that they compare against PG and EA alone as a form of ablation, given that their method is effectively a combination of these two. It would have been better if they also had compared with prior state-of-the-art (e.g. HDP, REGAL, Placeto, or (the unmentioned) GDP / GO), but it is somewhat understandable given that their code does not seem to be open-sourced. \\n\\nI liked that the authors report mean and standard deviation for the five runs, and measured \\u201ctrue\\u201d reward by running on hardware. I also thought they did a good job motivating their method (aside from the questionable statements about action spaces in prior work), and of analyzing and visualizing its performance.\", \"nits\": \"In the Method section, \\u201cthe compiler rectifies them and outputs a modified map, M_c, that is fully executable (Line 6).\\u201d It would probably be good to add \\u201cin Algorithm 1\\u201d so as not to confused the reader.\\n\\n\\u201cIt comprises of a single PG learner\\u201d -> \\u201cIt is comprised of\\u2026\\u201d\\n\\n\\u201cBoth methods are known to produce highly performant and stable solutions but are also significantly slow compared to Deep RL\\u201d (\\u201csignificantly slower than\\u201d?)\\n\\n\\u201cWhile the transferred policies are clearly underperform those from scratch\\u201d -> \\u201cunderperforming\\u201d\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting extension of the Graph Optimization using DRL line of work\", \"review\": \"Optimizing the execution of deep neural networks has tremendous impact on the cost and performance in many industries due to the proliferation of \\\"Deep Learning\\\". There has recently been an interesting line of work of using learning to optimize policies related to placement and scheduling of the neural network computation graph outperforming tediously hand-crafted heuristics. The proposed paper would be a nice extension along this line.\\nThe impact of memory placement for DNN has been clearly motivated and is easy to appreciate.\\nThe paper overall is clear and easy to follow. The methodology is sound and justified. Experiments and results make a compelling case in supporting the claims.\\n\\nI think one aspect that is insufficiently motivated is the need to use a hybrid approach between RL and evolutionary algorithms. Results show improvement in performance but it is not clear to me why. Perhaps this is addressed in the CERL paper which I am not familiar with.\\n\\nIn the results section, it would be nice to see the sample complexity of each of the methods. I see that the number of iterations are shown in Figure 3 but it is not clear to me if that also corresponds to the number of samples consumed by each of the approaches before they converge.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"The paper proposes Evolutionary Graph Reinforcement Learning to solve the memory placement problem. Main ideas are using GNN as the network architecture for reinforcement learning agents that look for more informed priors for evolutionary algorithms. Overall novelty of the paper comes from the neat combination of RL, EA, and GNN, and applying it to memory placement (ML for Systems).\\n\\nThe paper indeed tackles an important problem that can affect the overall performance and efficiency of the hardware. I believe the reorganization of various off-the-shelf ML techniques to solve real problems in the systems domain marks a large contribution, hence the positive overall rating.\\n\\nOne of the main drawbacks of the paper is that the paper only tests on a single type/configuration of hardware. While this is fine to some extent, this makes it hard to get confirmation about the generality of the overall method considering the large variance of the speedup.\\n\\nAnother related question comes from how this work relates to the optimizations of the dataflows [1,2]. As it is difficult to evaluate the overall memory communication without considering the order of operations, etc. the work in turn neglects the big question and focuses on only the partial view of the problem. It would provide a nice reference point if some of these points are discussed in the paper.\\n\\nLast question comes from the baselines. While the previous works on tensor optimizations [3,4] are very closely related and many of the ideas provide a good comparison point, these have not been discussed nor cited. For example, I guess AutoTVM's way of approximating the search space using TreeGRU or XGBoost can help. Also, Chameleon's way of sampling the examples using adaptive sampling may provide an interesting reference point in terms of reduction of number of samples.\\n\\nOverall, I have enjoyed reading the paper and I find the ideas in the paper interesting. I am currently weakly pro for the paper, and look forward to the authors' response :)\\n\\nQuestions\\n1. Could you provide and overview of the NNP-I's architecture in the appendix? Also, possibly ablation studies over different configurations of the hardware.\\n2. What are the relative communication speed of SRAM, LLC, and DRAM? \\n3. It is well discussed in the computer architecture community that the memory communication is very much determined by the dataflows of the architecture. How are the results affected by these dataflows?\\n4. How does the work compare to the methods described in [3,4]?\\n\\n[1] \\\"Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks\\\", ISCA 2016\\n\\n[2] \\\"Interstellar: Using Halide's Scheduling Language to Analyze DNN Accelerators\\\", ASPLOS 2020\\n\\n[3] \\\"Learning to optimize tensor programs\\\", NeurIPS 2018\\n\\n[4] \\\"Chameleon: Adaptive code optimization for expedited deep neural network compilation\\\", ICLR 2020\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Summary:\\n\\nThis paper proposes a new algorithm called EGRL to improve computation graph running time by optimizing placement of the graph's components on memory. Specifically, the authors demonstrate the algorithm on the Intel Neural Networks Processor for Inference (NNP-I), which allows them to map neural network components on one of three memory hierarchies, each with different tradeoffs. The authors demonstrate that this technique provides speedups on BERT, Resnet-50 and Resnet-101.\", \"pros\": [\"Some past papers (for eg. [1]) in this domain evaluate their work in simulators instead of real hardware, and often, the simulators make assumptions that are not realisitc. The paper tests its technique on actual hardware, and this is definitely a plus.\", \"The authors promise that they will open-source their code. This is important since many of the efforts in this domain remain fragemented and difficult to reproduce. This is primarily due to the lack of open source code or a standard benchmark.\", \"The paper is well written.\", \"The visualisations of the learned policy vs the baseline in Figure 6 are quite good.\", \"EGRL directly builds upon CERL so it is not very novel, but it has not been applied before to this domain.\"], \"cons\": [\"The paper evaluates the technique on just 3 workloads. This is in contrast to [1] who evaluate on 372 different workloads and [2] who evaluate on 96 synthetic graphs. [3] and [4] also evaluate on a very small number of workloads, but I believe they probably got a freepass since they were the earliest works in their domain.\", \"The baseline that the experiments are being compared against might be weak - in figure 3, it looks the policy from both EGRL and EA in iteration 0 itself beat the baseline for Resnet-101 and BERT!\", \"How long does it take to perform one iteration? And how long does it take to train the policy? This would be useful to get an idea of how EGRL fairs against [3] and [4] which also trained on real hardware and took many hours to finish training the policy.\", \"The demonstration of generalizability is insufficient - it is difficult to conclude that EGRL can generalize to other workloads. For eg. in Figure 4 (left), the policy performs worse than the baseline for Resnet-50. Moreover, two of the workloads are from the Resnet family.\", \"If it takes a long time to train each policy and if the model also shows poor zero-shot generalizability, it makes me question if this approach is practical for a compiler setting where a user would typically want the compilation to be completed quickly.\"], \"overall\": \"I felt that the paper has some interesting ideas but needs more experiments.\", \"questions_and_clarifications\": [\"I believe that the related work section should add a clarification - [1], [2], [3] and [4] primarily deal with device placement, i.e., placing components of computation graph on different CPUs/GPUs to optimize run time via better parallelization. While this work is concerned with mapping components to different memory hierarchies on the same device.\", \"While EGRL's action space is larger than [5], the action space in [1] is much larger - for a graph with 2000 nodes to be placed on 2 devices, there are 2^2000 possible choices ~ 10^603\", \"In Figure 6 (Bottom), is there any reason why you didn't show the result for BERT?\"], \"references\": \"[1] Aditya Paliwal, Felix Gimeno, Vinod Nair, Yujia Li, Miles Lubin, Pushmeet Kohli, and Oriol Vinyals. 
Reinforced genetic algorithm learning for optimizing computation graphs. arXiv preprint arXiv:1905.02494, 2020\\n\\n[2] Ravichandra Addanki, Shaileshh Bojja Venkatakrishnan, Shreyan Gupta, Hongzi Mao, and Moham-mad Alizadeh. Placeto: Efficient progressive device placement optimization. In NIPS MachineLearning for Systems Workshop, 2018\\n\\n[3] Azalia Mirhoseini, Anna Goldie, Hieu Pham, Benoit Steiner, Quoc V. Le, and Jeff Dean. A hierarchical model for device placement. In International Conference on Learning Representations, 2018. URL https://openreview.net/forum?id=Hkc-TeZ0W.\\n\\n[4] Azalia Mirhoseini, Hieu Pham, Quoc V. Le, Benoit Steiner, Rasmus Larsen, Yuefeng Zhou, NaveenKumar, Mohammad Norouzi, Samy Bengio, and Jeff Dean. Device placement optimization with reinforcement learning. arXiv preprint arXiv:1706.04972, 2017.\\n\\n[5] Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan, Joe Jiang, Ebrahim Songhori, Shen Wang, Young-Joon Lee, Eric Johnson, Omkar Pathak, Sungmin Bae, Azade Nazi, Jiwoo Pak, Andy Tong, KavyaSrinivasa, William Hang, Emre Tuncer, Anand Babu, Quoc V. Le, James Laudon, Richard Ho,Roger Carpenter, and Jeff Dean. Chip placement with deep reinforcement learning. arXiv preprint arXiv:2004.10746, 2020.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
YhhEarKSli9 | AutoBayes: Automated Bayesian Graph Exploration for Nuisance-Robust Inference | [
"Andac Demir",
"Toshiaki Koike-Akino",
"Ye Wang",
"Deniz Erdogmus"
] | Learning data representations that capture task-related features but are invariant to nuisance variations remains a key challenge in machine learning. We introduce an automated Bayesian inference framework, called AutoBayes, that explores different graphical models linking classifier, encoder, decoder, estimator and adversarial network blocks to optimize nuisance-invariant machine learning pipelines. AutoBayes also enables learning disentangled representations, where the latent variable is split into multiple pieces to impose various relationships with the nuisance variation and task labels. We benchmark the framework on several public datasets, and provide analysis of its capability for subject-transfer learning with/without variational modeling and adversarial training. We demonstrate a significant performance improvement with ensemble learning across explored graphical models. | [
"autobayes",
"bayesian graph exploration",
"inference autobayes",
"inference",
"data representations",
"capture",
"features",
"invariant",
"variations",
"key challenge"
] | Reject | https://openreview.net/pdf?id=YhhEarKSli9 | https://openreview.net/forum?id=YhhEarKSli9 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"iMY2l_UXcIO",
"DpV62B3M15D",
"bTS0BtQ4EJq",
"AJIj1H3HUvZ",
"bcpXDs3IH6",
"DXaRFaZHsJi",
"L5EWXWSFN63",
"ecOgK7MiCg",
"Y6OJYJ9_pJp"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040363822,
1606262800561,
1606230485608,
1606219601983,
1606174183622,
1603915247136,
1603848022252,
1603765762710,
1603754056871
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3687/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3687/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3687/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3687/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3687/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3687/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3687/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3687/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This work proposes a method for identifying appropriate graphical models through enumeration, pruning of redundant dependencies, and neural network conditionals. While structure learning is an interesting application and there are some promising results, there were a number of concerns around experimental evaluation, computational complexity of the method, clarity of the presentation, and connections to prior work. In particular, R1's concerns around the large field of structure learning in Bayesian Networks, and unwillingness to use the established terminology (and comparing to methods there) was not sufficiently addressed in the rebuttal.\"}",
"{\"title\": \"Authors' Response to AnonReviewer1\", \"comment\": \"We appreciate you for your careful reviews and suggestions. We are happy that you found our paper interesting and convincing. We made a major revision to improve the paper. Although we admit that many future works remain to rigorously demonstrate the usefulness of our idea, we believe that our proposed framework has some important insides and contributions for the research community. We hope you would reconsider assessing higher rating for the revised paper. Detailed responses are summarized below:\\n1. On related work: We thank you for the important comment. Accordingly, we added additional literature on Bayesian network and structure learning. We certainly admit that some terminologies (graph model etc.) are not well defined. As pointed out, our Bayesian graph should be identical to Bayesian network. Considering the fact that Bayesian network is also confusing as it may refer some specific deep neural network instances having Bayesian inference rather than mathematical concepts, we decided to keep using Bayesian graph. Instead, we added a sentence clarifying that our Bayesian graph is same as Bayesian network. There is another reason why we still use undefined graph concepts; besides Bayesian network (joint probability factoring), we also explicitly represent inference strategy in graphical model as in Fig. 5 (for conditional probability factoring) as there any non-unique multiple strategies. We believe it is not a major issue.\\n2. On novelty: As you described, our idea has a solid set of novelties over existing work on Bayesian network and structure learning. Besides the novelties you listed, one of major contributions includes the fact that AutoBayes can reasonably involve adversarial censoring for latent variables which is independent of another factor in a systematic way. In addition, we believe that the introduction of ensemble stacking in AutoML framework is novel and advantageous as most hyperparameter design methods throw away any weaker base models explored. Our results of ensemble AutoBayes showed a significant gain empirically.\\n3. On complexity: We completely agree with you that the near-exhaustive search of our AutoBayes algorithm has a complexity issue when the number of nodes is large. Nevertheless, we believe that the required number of nodes to model realistic datasets is limited. Please refer our response 1 to Reviewer2. In addition to the discussion of node size, we added discussion and new experimental results to show the trade-off between accuracy and complexity; specifically, Fig. 9(b) and Fig. 11. We empirically showed that our AutoBayes can outperform individual models when comparing at the same complexity. We believe those revisions made our paper improved a lot.\\n4. We added more clarification of what is nuisance. And, we added a new result in Fig. 9(a) and Fig. 10 to show the benefit of AutoBayes for minimizing nuisance variabilities.\\n5. We removed redundant item from contribution list accordingly.\\n6. Under 9-page limitation, we tried to correct and include as many information required in the main body, moving from appendix.\\n7. We added citation of ELBO concept.\\n8. We added discussion of structure learning and Bayesian information criterion.\\n\\nPlease do not hesitate to post further comments or questions.\"}",
"{\"title\": \"Authors' Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for reviewing our paper with detailed suggestions. We are glad to hear that the reviewer found our paper interesting and encouraging. Although there remain unsolved challenges to be dealt with in the future work, we believe that our paper provides sufficiently novel and useful ideas for the research community. Please find our responses below for respective queries:\\n1. On style: We made more diligent proofreading, revised for any grammatical errors and word choice, and provided additional explanations about shading in the caption of Fig. 3, thick circles and different line colors in the captions of Figs. 5 and 6. Because other reviewers found that the paper was well written and organized, we focused on revisions to add more discussions for better flows rather than a drastic change of the paper organization. We made major revision to improve readability under a hard limitation of 9-page length. Nevertheless, we admit that the revised paper still requires some preliminary knowledge for reading comprehensively. We hope our revision made it sufficient for a wider range of readers.\\n2. On context: We admit that we should have explained better the context of nuisance robustness. In response to Reviewer3, we clarified the concept of nuisance robustness in Section 1. One of the major hurdles in analyzing physiological datasets is the change in data distributions across different subjects and their biological state during recording sessions. There is a vast amount of research to discover subject-invariant features relevant for task classification to obviate the need for frequent calibration required in human-machine interfaces. AutoBayes is an AutoML framework targeted to provide robustness against nuisance variables, such as subject identities and recording sessions in the classification of physiological datasets. The nuisance-robust machine learning is also useful for various other applications. For example of image recognition, some factors such as image sensor specs, ambient environmental conditions, photographer's identity may indirectly affect the performance. For speech recognitions, many nuisance factors such as speaker's attributes, microphone's condition, recoding space etc. may change the classification accuracy. Those inherent nuisance factors are modeled with a random variable $S$ in Bayesian graph. We further added new experimental results in Figs. 9(a) and 10 to explain the nuisance robustness. Please refer our response 1 to Reviewer 4. \\n3. On time/space complexity: We agree with the reviewer that time and space complexity analysis would be a necessary part of discussion in an automated machine learning research. In response to Reviewer 3, we expanded the discussion -- see Fig. 9(b) --, where we added a detailed comparison of the accuracy of different methods vs. the number of trainable parameters. The results well supported that the AutoBayes can outperform individual model in Pareto sense for a fear comparison at the same complexity although it is difficult to achieve lower complexity. In addition, we added measurement results of wall-clock runtime in Table 4. We also evaluated the time complexity for different network size in Figs. 11(a) and (b), which show the task classification accuracy as a function of computation time for training and testing, respectively, for the Stress dataset. We accordingly added discussion for those new results in the revised paper. 
Please refer our responses 1 and 3 to Reviewer 2, response 2 to Reviewer 4, and response 3 to Reviewer 1.\\n4. On baselines and datasets: We added more detailed description of related works in Sections 2 and A.1. We added more detailed description of datasets in Section A.6. Due to the page limitation, we decided to keep concise descriptions in the main body and to move detailed descriptions in appendix.\\n\\nPlease do not hesitate to post further comments or questions. As our paper was greatly improved with additional explanations, discussions and results, we hope you would re-assess our paper with higher rating.\"}",
"{\"title\": \"Authors' Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for the effort spent in reviewing our paper, and for the detailed suggestions. We are happy that you found our paper interesting and well written. Although our paper is still at the proof-of-concept stage, we believe that the concept has some useful and novel insights for the community. Reflecting your valuable suggestions, we made a major revision to improve the paper. As we specifically added more experimental results and discussion to resolve your concerns, we hope you would reconsider the rating. Please find our responses below for respective queries:\\n1. On nuisance robustness: Thank you for the very important comment. We added new performance figures to facilitate the discussion of subject robustness, according to your suggestion. One of these figures shows accuracy vs. model with box-whisker plots in Fig. 9(a), where the worst/best,1st/median/3rd quartiles are present to show the dispersion of per-subject accuracies. The other figure shows accuracy vs. subject with box plots in Fig. 10, where the distribution is determined by per-model accuracies. Fig. 10 clearly shows that the performance highly depends on the subject ID. From Fig. 9(a), it can be verified that the standard classifier model A (with no consideration of nuisance $S$) is not robust against the subject variation; more specifically, the best-case user may achieve 96% accuracy whereas the worst-case user has a poor performance of 82%. The subject robustness was significantly improved by AutoBayes which explores nuisance-robust models B to Kz including A-CVAE; specifically, the worst-case user performance was improved to 94% with the explored model Fz. Besides, ensembling multiple graphical models provides non-dispersive distribution of the accuracies across different users (nuisance factors). This is an empirical evidence that the AutoBayes method is advantageous to improve the robustness against nuisance. The standard classifier, model A, has two pitfalls. On the one hand, it has limited versatility to capture task relevant features from datasets with highly dependent on nuisance factors (There is strong empirical evidence proving this point in Fig. 2. AutoBayes models perform much better on physiological datasets, such as RSVP, MI, ErrP, Faces and ASL, where inference is highly dependent on the biological state of each subject). On the other hand, the accuracy of standard classifier does not exploit much of the available information -- nuisance factors $S$ -- and therefore is unstable across different subjects. AutoBayes exploits adversarial censoring to suppress nuisance information $S$ to generate subject-invariance latent variables, and therefore has less variations across subjects as demonstrated in Fig. 9(a). We believe those additional results and discussion improved the manuscript significantly.\\n2. On hyperparameter search: Your comments are valuable. It is partly related to the comments of Reviewer2, and please refer our responses therein. In order to resolve your concern, we added a detailed comparison of the graphical models in different hyperparameter configurations (different number of hidden layers and number of nodes in each layer). From the new figure showing the trade-off between model size and accuracy in Fig. 9(b), it is evident that AutoBayes can still outperform the standard classifier model A and A-VAE model B at the same complexity for fairness comparisons. 
There is strong empirical evidence that exploration of various inference strategies that best fit each generative model offers significantly more contribution to model accuracy than increasing the depth of the networks of a single graphical model. Additionally, Table 4 also presents model B is sub-optimal in ASL dataset, performing only with $37.80$% accuracy. Hence, exploration of neural network topology is vastly useful as we never know which model works best and how much nuisance variabilities is inherent, given new datasets. There is also strong empirical evidence that AutoBayes models perform much better on physiological datasets, such as RSVP, MI, ErrP, Faces and ASL, where inference is highly dependent on the biological state of each subject. Leveraging the simplest models A or B on datasets that do not present high nuisance variabilities is a valid premise.\\n3. On scalability: The AutoBayes framework can be extensible to multiple latent factors. We have experimentally demonstrated it by considering zero latent models (models A,C), one latent models (models B, D-I), and two latent models (models J,K). Two-latent models perform relatively well for some datasets, while it is not always best. Of course, we could consider more nodes, but the search space will rapidly grow with the number of nodes. Please refer our responses for Reviewer2 for this scalability issues.\\n4. We made further modifications accordingly.\\n\\nPlease do not hesitate to post further comments or questions.\"}",
"{\"title\": \"Authors' Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the time and effort spent in reviewing our paper, and for the detailed suggestions. Your comments are all excellent in summarizing the overview and strength as well as weakness of our paper. Reflecting your valuable comments, we made a major revision adding further experimental results and discussion. We believe that our paper was significantly improved thanks to your comments. We hope you would find the usefulness of AutoBayes framework, and consider higher rating. Please find our responses below for respective queries:\\n1. On scalability: We completely agree with you that the current AutoBayes proposal has a scalability issue as it requires near-exhaustive search of models whose search space rapidly increases with the number of nodes in the graphical model. However, we would like to argue that factorizing macro nodes into a large number of micro nodes is not always useful or necessary in real-world datasets. We believe that macro-level network exploration considering only a few vertices in the Bayesian graph model is sufficient for most cases. For example, image classification tasks may have many inherent nuisance factors such as ambient light conditions, photographers' skills, camera specs, etc., and those can be represented by multiple random variables $S_1, S_2, \\\\ldots$, instead of a single joint nuisance variable of $S$. Nevertheless, we typically have no sufficient knowledge of minor or non-dominant nuisance variations for data analysis in reality. Hence, considering only a few nuisance factors should be reasonable and realistic. Our target is to optimize subject-invariant machine learning pipelines for human-machine systems to analyze physiological signals, where we do not expect any large number of nuisance factors, other than subject identity, session number, or task conditions. We can of course model multiple factors into a single joint nuisance factor if desired. Besides nuisance nodes $S$, the scalability issue will arise when we split the latent node $Z$ into many factors $Z_1, Z_2, \\\\ldots$. We also argue that imposing many inhomogeneous stochastic latent variables will not always be beneficial. We believe that only a few latent factors is still sufficient in practice (to explicitly represent different features). In our paper, we actually considered to have two latent variables $Z_1, Z_2$ such as model K. Its advantage over the single latent variable was not obvious. In consequence, we believe that the scalability issue will not be a major drawback for practical datasets which may not need a large number of graphical nodes. Nevertheless, we admit that the current exhaustive exploration shall be improved by more sophisticated criteria for efficient exploration. We left this challenging problem as a future work to tackle. We added some discussion on this scalability issue in the revised draft accordingly.\\n2. On interpretation of the learned structures: In our opinion, AutoBayes models perform much better on physiological datasets, such as RSVP, MI, ErrP, Faces and ASL, where inference is highly dependent on the biological state of each subject. These datasets are also event related EEG and EMG signals with notoriously low signal-to-noise ratio. It is intuitive that CVAE provides comparable performance with AutoBayes on QMNIST and Stress datasets which are less subject to variations in subject identities and have higher signal-to-noise ratio. \\n3. 
On fairness of experiments: Thank you for your important comment. We added a detailed comparison of the graphical models in different hyperparameter configurations (different number of hidden layers and number of nodes in each layer) to discuss the fairness. From the accuracy vs. number of model parameters plot as outlined in Fig. 9(b), it is evident that more complexity will lead to better performance while AutoBayes can still outperform the standard classifier (simple model A) at the same space complexity in higher accuracy regimes. It suggests that exploration of various inference strategies that best fit each generative model has significantly more contribution to model accuracy than increasing the depth or nodes of the neural networks of a single graphical model. More analysis of trade-off between complexity and accuracy is added in section A.10. \\n4. On empirical comparison: We tested ensemble models from different initializations but same architecture upon request by AnonReviewer2. The performance gain in the experimental results was marginal. Additionally, ensemble of even $3$ or $4$ best different graphical models is highly more efficient than the ensemble of a single graphical model with various different arrangements of hidden layers and nodes, while also performs far better in accuracy. It suggests that parallel activity of vast assemblies of different graphical models is more useful in ensembling. We added the relevant discussion in manuscript.\\n\\nPlease do not hesitate to post further comments or questions.\"}",
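For concreteness, the ensemble stacking discussed in point 4 above can be sketched as follows. The interface below (per-model class-probability arrays and a logistic-regression meta-learner) is an illustrative assumption rather than the paper's exact implementation, and the arrays are synthetic stand-ins for the base models' softmax outputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_val, n_test, n_classes, n_models = 200, 100, 4, 3

# Synthetic per-model class probabilities standing in for the softmax
# outputs of the explored graphical models (A, B, ..., K).
val_probas = [rng.dirichlet(np.ones(n_classes), size=n_val) for _ in range(n_models)]
test_probas = [rng.dirichlet(np.ones(n_classes), size=n_test) for _ in range(n_models)]
y_val = rng.integers(0, n_classes, size=n_val)

# Stacking: concatenate the base models' probabilities into meta-features
# and fit a meta-classifier on held-out validation labels.
meta_X_val = np.concatenate(val_probas, axis=1)    # (n_val, n_models * n_classes)
meta_X_test = np.concatenate(test_probas, axis=1)

meta_clf = LogisticRegression(max_iter=1000).fit(meta_X_val, y_val)
y_pred = meta_clf.predict(meta_X_test)
```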
"{\"title\": \"Enumerating all possible graphs can achieve better performance on EEG datasets, but with potential issues on scalability.\", \"review\": \"**Update**\\nI have read the author's rebuttal, and happy to see that a discussion regarding parameters is added (Figure 9). Other than that, my personal concern is similar to Anon Review 3's -- it seems that the core idea of the paper is drowned in too many technical details (granted, many of these are needed in order to implement this correctly). I wonder if a clearer discussion can be made like this -- you have a variational inference problem with certain independence assumptions, so write this out in the most abstract manner possible. To come up with a concrete objective, the question then becomes \\\"given the factor graph, how do we add the networks (this basically lines 11 - 16 in Algorithm but not in the flow of the main text)\\\". I think a better presentation and clarity in the main paper would greatly help acceptance.\\n\\n**Overview**\\nThe paper proposes AutoBayes, which enumerates all the plausible graphical models between data, label, and nuisance variables, remove redundant edges with d-separation rules, and learns neural networks to represent the parent-to-child information.\\n\\n**Strengths**\\nEmpirical results are very strong for certain EEG datasets such as ErrP and RSVP. It appears that on the EEG datasets, different graph structures would have very different performances so there might not be a graph structure that works equally well on all of them, which motivates the need for searching for the optimal graph structure.\\n\\n**Weaknesses**\", \"scalability\": \"unlike existing methods in AutoML, AutoBayes does not seem to attempt to optimize the process from which graphs are selected (i.e. pruning of graphs that are unlikely to work well), resulting in the need to enumerate all possible graphs (where the complexity is doubly-exponential with respect to the number of variables). This means that the method will have difficulty even scaling to a small amount of variables (e.g. 10).\", \"interpretation_of_the_learned_structures\": \"it seems that on some datasets, CVAE already provides comparable performance to the best AutoBayes architecture, and on others best AutoBayes architecture perform much better. Can we extract any insight from the AutoBayes procedure?\", \"fairness_of_experiments\": \"for each component of the graph, the network structure is the same; therefore, compared to a structure X->Z->Y, we have fewer parameters if we use X->Y. It is unclear as to whether the empirical improvement can be gained simply by representing edges with larger networks.\", \"empirical_comparison\": \"it appears that model ensembles have a significant effect over the performance. However, since one can also ensemble models from different initializations but the same architecture, it is unclear what the performance gap would become if we also do ensemble on one model but with different parameters.\\n\\n**Minor suggestion**\\nPerhaps spend some text describing what is special about EEG datasets, and why do we expect having to predict nuisance variables to improve performance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but further motivations and experimental discussions are needed.\", \"review\": \"Summary:\", \"the_paper_presents_autobayes\": \"a new approach for nuisance-robust deep learning which explores different Bayesian graph models to search for the best inference strategy. It automatically builds connections between classifier, encoder, decoder, nuisance estimator and adversary DNN blocks. The approach also enables disentangling the learned representations in terms of nuisance variation and task labels. Different benchmark datasets have been used for evaluation.\\n \\n##################################################################\", \"strenghts\": [\"The idea of automatically exploring various graphical models to select the best performing one is interesting.\", \"Overall, the paper is well written and easy to read.\", \"Ensemble learning is performed by stacking the explored graphical models allowing to improve performance.\", \"##################################################################\"], \"weaknesses\": [\"My main concern about the paper is that even though quantitative evaluation has been performed on several datasets with different modalities to show the benefit of using AutoBayes, the experimental evaluation section is not convincing enough as it lacks interpretation and analysis of the obtained results. For instance, one major problem addressed in this paper is models robustness to nuisance factors, however this was not discussed in this section. Hence, it would be good to include an experimental evaluation on this point.\", \"In Table 4 in Appendix, the simple Model B which assumes independence between Z and S performs remarkably well on task classification of the 5 first datasets since it outperforms state-of-the-art methods on all of them except QMNIST and achieves classification accuracies that are very close to the best ones (with either variational or non-variational inference), outperforming most of the presented graphical models. On the remaining datasets, Model A which is independent of S and Z, performs well compared to other graphical models especially on Faces Basics and Faces Noisy datasets. Considering these observations and perhaps the time consumption of AutoBayes, what would be the motivation of exploring all the presented graphical models (besides stacking them for ensemble learning) rather than using the simplest models A or B with good hyperparameter search? Could these results be explained by the fact that potentially some of these datasets do not present high nuisance variabilities?\", \"One of the main contributions as presented in Section 2 is the extensibility of the proposed framework to multiple latent representations and nuisance factors. Although this was demonstrated theoretically, it would be interesting to demonstrate this experimentally.\", \"One of the properties of AutoBayes is its ability to learn disentangled representations in terms of nuisance factors. In practice, how can this be evaluated?\"], \"minor_comments\": [\"Typo in Equation 6: x\\u02c6 = p_\\u03bc(z1, z2) instead of x\\u02c6 = p_\\u03bc(z1)?\", \"Steps to derive Equation 7 are not straightforward and can be more clarified.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very interesting idea and results, but relevant previous work regarding Bayesian networks and structure learning is not discussed at all.\", \"review\": \"The authors propose a framework, called AutoBayes, to automatically\\ndetect the conditional relationship between data features (X), task\\nlabels (Y), nuisance variation labels (S), and potential latent\\nvariables (Z) in DNN architectures. Assuming a Bayesian network (BN)\\nwhich represents the (possibly) conditional independencies between the\\naforementioned variables, the authors propose a learning algorithm\\nwhich consists of applying Bayes-ball to detect and prune unnecessary\\nedges in the graph (effectively finding a subgraph, independence map\\nof the BN), train the resulting DNN architecture, and choose the network\\nwhich achieves the highest validation performance. This idea is\\ninteresting, especially compared to hyperparameter optimization\\napproaches for model tuning, and the results seem convincing.\\n\\nHowever, relevant previous work is not cited and discussed in the paper.\\nSpecifically, BN structure learning and inference in BNs (both of\\nwhich are well studied and have extensive literature) are fully\\nrelevant, but are not discussed or mentioned at all. For instance,\\nthe paper uses undefined terms such as \\\"Bayesian graph model,\\\" \\\"Bayesian\\ngraphs,\\\" and \\\"graph model,\\\" in place of Bayesian network (which is\\nrigorously defined). It is important that such related previous work\\nbe discussed to delineate what is novel in the presented approach and\\nplace its contributions within the greater context of this previous\\nwork. This inclusion would also help the presentation of concepts in\\nthe paper. For instance, the discussion surrounding equations 1\\nand 2, i.e.:\\n\\\"The chain rule can yield the following factorization for a generative\\nmodel from Y to X (note that at most 4! factorization orders exist\\nincluding useless ones such as reverse direction from X to Y )...,\\\" is\\nthe concept of an elimination ordering in the elimination\\nalgorithm for BNs (and graphical models in general). Showcasing the\\npresented work in this light, (i.e., as a natural\\ncombination of BN structure learning with macro-level neural-architecture\\noptimization) would be particularly novel and compelling.\\n\\nFinally, it is important to discuss the complexity of the presented\\nalgorithm. Given the Bayesian networks (BNs) in Algorithm 1, each\\nindependence map (and the underlying DNN\\narchitecture) must be trained then validated. This algorithm scales\\nfactorially in the number of nodes in the BN. It is great that the\\nselected subgraph performs so well (Figure 2), but super-exponential\\ncomplexity multiplied by DNN cross-validation training is going to be\\nvery hard to do as m and n grow in Algorithm 1.\", \"other_comments\": \"-The definition of nuisance and nuisance-variables are implicitly\\nassumed throughout the paper. An exact definition of what is meant by\\nnuisance, in the context, of the paper would be very helpful.\\n\\n-Algorithm 1 is mostly one large block of text, and is very hard to\\nparse on the first read.\\n\\n-In the main contributions enumerated from pages 2-3, contribution 3\\nlooks redundant given contribution 1.\\n\\n-\\\"Besides fully-supervised training, AutoBayes can automatically build some relevant graphi-\\ncal models suited for semi-supervised learning.\\\" <- please include a\\nlink to where this is discussed. 
The enumerated list of contributions\\nwould be a perfect roadmap for the paper (just include references to\\nsections after every contribution)\\n\\n-\\\"We note that this paper relates to some existing literature... as addressed in Appendix A.1. Nonetheless, AutoBayes\\nis a novel framework that diverges from AutoML, which is mostly employed to architecture tuning\\nat a micro level. Our work focuses on exploring neural architectures at a macro level, which is not\\nan arbitrary diversion, but a necessary interlude.\\\" <- Appendix A.1\\nshould really be included in the\\npaper, related work is a not an optional section. For instance, the\\nreader may not know what the authors mean in terms of micro versus\\nmacro level. Reading this without further explanation until later\\nin the paper, it would\\nseem that micro-level is more nuanced than macro-level and the former\\nperhaps subsumes the latter; the authors should detail what they mean\\nand distinguish their work from previous works in the main paper.\\n\\n-The terminology is very clumsy: \\\"Bayesian graph models,\\\" \\\"Bayesian\\ngraph,\\\" and \\\"graph model\\\" are not established terms and, as such,\\nshould be defined so the reader knows what type of ML method is being discussed. The authors should specify that they have a Bayesian\\nnetwork whose factorization describes the conditional relationship\\nbetween (random) variables.\\n\\n-\\\"VAE Evidence Lower Bound (ELBO) concept\\\" <- please include citation\\nfor the ELBO\\n\\n-\\\"How to handle the exponentially growing search space of possible\\nBayesian graphs along with the number of random variables remains a\\nchallenging future work.\\\" <- this is exactly structure learning in\\nBayesian networks (see the Bayesian information criterion, i.e., BIC score).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea, encouraging results, but poorly organized paper\", \"review\": \"The authors present a novel method dubbed AutoBayes that tries to find optimal Bayesian graph models for \\\"nuisance-robust\\\" deep learning. They employ the Bayes-Ball algorithm to construct reasonable inference graphs from a generative model given by iterative search. The corresponding DNN modules are then built/linked and trained using a form of variational inference with adversarial regularization where applicable. The authors also propose the use of an ensembling approach to further improve robustness of the \\\"best\\\" model.\", \"pros\": [\"Interesting and novel approach\", \"Experimental results are encouraging and seem to demonstrate a clear advantage of AutoBayes over other methods\"], \"cons\": [\"The organization of the paper is erratic; overall structure doesn't flow well.\", \"Multiple figures have details (e.g. color or style of arrows, shading, etc.) which are never explained.\", \"Little to no context is given for what nuisance-robust learning (and correspondingly, nuisance \\\"variation\\\" variables) actually is. Similarly for the datasets and baselines.\", \"No discussion of time/space complexity or computational demands, which is surprising given the apparent combinatorial complexity of the nested for-loops in Algorithm 1.\", \"Placement with respect to existing work is somewhat vague (the authors refer to \\\"similarities\\\" and \\\"relationships\\\" but do not concretely describe them).\", \"The paper needs to be proof read several more times. There are numerous grammatical errors and poorly phrased sentences. I will give a few examples below, but this is largely up to the authors to sort out.\", \"Overall, the paper reads more like a rough draft of a technical report than a standalone research article. It's confusing, difficult to read (I found that I needed to jump around and re-read a lot in order to understand what the authors were trying to say), and fails to give necessary context to readers who aren't familiar with a very specific subset of the deep learning literature. While it's of course always fine to defer to references for details, it's important that the reader can broadly understand your method and the problem it's trying to solve without hunting down a dozen references. This paper seems to assume that the reader has read every relevant paper and is arriving at this work in sequence.\", \"If the authors are willing to do some reorganizing and more diligent proof-reading, I'd be happy to reconsider my rating after seeing the revisions.\"], \"a_couple_of_examples_of_poorly_phrased_or_grammatically_incorrect_sentences\": \"\\\"It may be because the probabilistic ~relation~ *relationships* *in the* underlying data ~varies~ vary ~over~ across datasets\\\" (pg 2)\\n\\n\\\"The ~whole~ DNN blocks are trained with ~adversary~ *adversarial* learning ~in a~ *using* variational Bayesian inference.\\\" (pg.3)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
CLYe1Yke1r | Box-To-Box Transformation for Modeling Joint Hierarchies | [
"Shib Sankar Dasgupta",
"Xiang Li",
"Michael Boratko",
"Dongxu Zhang",
"Andrew McCallum"
] | Learning representations of entities and relations in knowledge graphs is an active area of research, with much emphasis placed on choosing the appropriate geometry to capture tree-like structures. Box embeddings (Vilnis et al., 2018; Li et al., 2019; Dasgupta et al., 2020), which represent concepts as n-dimensional hyperrectangles, are capable of embedding trees by training on a subset of the transitive closure. In Patel et al. (2020), the authors demonstrate that only the transitive reduction is required, and further extend box embeddings to capture joint hierarchies by augmenting the graph with new nodes. While it is possible to represent joint hierarchies with this method, the parameters for each hierarchy are decoupled, making generalization between hierarchies infeasible. In this work, we introduce a learned box-to-box transformation which respects the geometric structure of the box embeddings. We demonstrate that this not only improves the capability of modeling cross-hierarchy compositional edges but is also capable of generalizing from a subset of the transitive reduction. | [
"Box embeddings",
"Representation Learning",
"Joint Hierarchy",
"transitive relations",
"knowledge graph embedding",
"relational learning."
] | Reject | https://openreview.net/pdf?id=CLYe1Yke1r | https://openreview.net/forum?id=CLYe1Yke1r | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"MhWcdFq-kdO",
"VqGcXOwnSVN",
"So3IRn5mj75",
"PxreRPmmH3e",
"q_lAfHcn7T2",
"jy3Hd8t7cLp",
"zMwA_nkQYwQ",
"N7Th-ffpu6B",
"hF9X6gJOTvU",
"WKVTL2Mec8a",
"uXj6EFkcKFj",
"ERtahUzE4F-",
"uUKgZ9J5mCP",
"G0mfqMT-s9y",
"nqstfm0ADkb",
"QCMSzfv8AM1",
"Khrm-tvfVEL",
"eRP16290P4b"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040429281,
1606249157651,
1606248519038,
1606233501949,
1606231435823,
1606159933481,
1606159783639,
1605798608345,
1605785602688,
1605680886595,
1605680770011,
1605680496579,
1605679890918,
1605679757391,
1603845227739,
1603533257981,
1602812982566,
1602588036718
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3684/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper is concerned with modeling multi-relational data with joint hierarchical structure. For this purpose, the authors extend box embeddings to multi-relational settings, supporting the modeling of cross-hierarchy edges and generalizing from a subset of the transitive reduction. The reviewers highlight that the paper is, overall, well-written and organized, relevant to the ICLR community, and that the proposed method offers promising experimental results. Furthermore, the author's rebuttal clarified some concerns of the initial reviews (e.g., relation to GumbelBox, comparison to additional baselines etc.) and improved the manuscript.\\n\\nHowever, after rebuttal there exist still concerns regarding the current version. Reviewers raised concerns regarding novelty, clarity, and the empirical evaluation (importantly modeling more than 2 relations; it would also be good to understand more clearly why some of the newly added multi-relational hyperbolic baselines perform worse than uni-relational Poincare embeddings). While the paper and the proposed method clearly have promise, I agree with reviewers that the manuscript would require an additional revision to clarify these points. Given the positive aspects of the paper, I'd strongly encourage the authors to revise and resubmit their work given this feedback.\"}",
"{\"title\": \"Three Hierarchical Relations:\", \"comment\": \"As requested, we have also created an evaluation involving 3 hierarchical relations:\\nIsA (hypernym + instance_hypernym)\\nPartOf (part_meronym + substance_meronym)\\nmember_meronym\\nWe include compositional edges between member_meronym and IsA in the same manner as for PartOf and IsA. Preliminary experiments on a representation task suggest a similar trend for model performance, where the box-to-box transformation model outperforms multi-relational hyperbolic baselines.\\n\\n\\n| 3-Relation Hierarchy | ---------------------------------- --------------| F1 score |\\n \\n | MuRP | | 0.17 |\\n | RotE | | 0.53 |\\n | AttH | | 0.53 |\\n | RotH | | 0.54 |\\n | Box-to-box transform | | 0.71 |\\n\\n\\nIf accepted, we will include these results in the camera-ready, along with evaluations on the other baselines as well as generalization evaluations for this three-hierarchy setting.\"}",
"{\"title\": \"Three Hierarchical Relations:\", \"comment\": \"As requested, we have also created an evaluation involving 3 hierarchical relations:\\nIsA (hypernym + instance_hypernym)\\nPartOf (part_meronym + substance_meronym)\\nmember_meronym\\nWe include compositional edges between member_meronym and IsA in the same manner as for PartOf and IsA. Preliminary experiments on a representation task suggest a similar trend for model performance, where the box-to-box transformation model outperforms multi-relational hyperbolic baselines.\\n\\n\\n| 3-Relation Hierarchy | ---------------------------------- --------------| F1 score |\\n \\n | MuRP | | 0.17 |\\n | RotE | | 0.53 |\\n | AttH | | 0.53 |\\n | RotH | | 0.54 |\\n | Box-to-box transform | | 0.71 |\\n\\n\\nIf accepted, we will include these results in the camera-ready, along with evaluations on the other baselines as well as generalization evaluations for this three-hierarchy setting.\"}",
"{\"title\": \"Lack of Hierarchical Relations\", \"comment\": \"There is a distinction to be made between a relation being semantically hierarchical and a set of edges which represent a hieararchy. While hypernymy, for example, is semantically hierarchical, that does not mean that any subset of hypernymy edges would exhibit hierarchical structure from a graph-theoretic perspective, nor that a model which is capable of modeling hierarchies should be capable of representing a given subset.\\n\\nFor example, consider taking a binary tree on N nodes, which is unquestionably hierarchical, and then take any subset of edges $S$ such that each node has out_degree + in_degree <= 1. Further, take a random subset of these edges as a training set $T$, and then evaluate on the remaining edges in $S\\\\setminus T$. There is no reason that a model which is designed specifically for modeling hierarchies (nor, indeed, *any* model) should be capable of recovering the edges in $S$, which is simply a collection of nodes which are each connected to at most one other node. The fact that they were originally from a tree is completely lost, and, probabilistically, *any* extension to a tree which contains the edges in $T$ is just as likely as any other. The probability for any given element to be the root, for example, is essentially uniform across all nodes with in_degree = 0.\\n\\nThis is an (only slightly) exaggerated version of the situation in WN18RR.\"}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for responding to all the questions and performing additional experiments.\\n\\nHowever, an MRR score of 37.5 on WN18RR is much lower compared to the hyperbolic models (MuRP 48.1 and RotH 49.6) or even those models not specifically designed for modelling hierarchy, which leaves me unconvinced about the ability of the proposed approach to model multiple hierarchies or more than 2 relations in general. Further, the authors argue that \\\"none of the relations in WN18RR are, in fact, hierarchical\\\" which is factually incorrect, especially given that they use 4 of the relations from WN18RR in their own dataset. The WN18RR data may contain missing edges, but the relations themselves **are** hierarchical, and an embedding model designed specifically for modelling hierarchies should be capable of capturing that. I accept that the proposed model may underperform on symmetric relations from WN18RR, but I would expect it to at the very least be able to capture the hierarchical ones, which this paper unfortunately does not show.\\n\\nThus, I stand by my initial score.\"}",
"{\"title\": \"RE: Response to authors after rebuttal (cont.)\", \"comment\": \"### Response to Q2.2:\\n**In general:** The goal of this paper is to jointly model multiple hierarchical relations of sufficient depth and connectedness where there are interesting cross-relational interactions. In particular, while models that perform KBC can also be applied to this task, this is a different task than KBC, where, as we note below, the hierarchical relations are often severely disconnected.\\n\\nMore importantly, we do not claim that our model can perform KBC more generally and certainly expect existing KB embedding models to perform better on such tasks. \\n\\n**Performance on WN18RR:** Although it is not our focus, as requested we have evaluated our model on WN18RR and found that it achieves 37.5 MRR. While not SOTA performance, we find the performance acceptable, given the following:\\n1. The goal of our model is to model hierarchical relations jointly, but the KBC datasets have many symmetric relations. For example, based on our analysis, over 37.3% of the relations in the WN18RR test set are symmetric.\\n2. Moreover, none of the relations in WN18RR are, in fact, hierarchical. In [1,2] the Krackhardt hierarchy score for each relation is reported, however the name of this metric is a bit misleading, as the calculation of this metric (Appendix D in [1]) only represents the relation's asymmetry. For example, any graph where each node has at most one edge connecting it to a single other node (i.e. max path length of 1) would have a Krackhardt hierarchy score of 1, despite being a disconnected set of edges. Krackhardt actually proposed four metrics (\\u201chierarchy\\u201d, \\u201cconnectedness\\u201d, \\u201cefficiency\\u201d, and \\u201clubness\\u201d), all of which should be 1 for a hierarchical graph [3]. Based on our analysis, the hierarchical relations in all of WN18RR are comprised of a large number of shallow trees.\\n3. Finally, the split of WN18RR was created with the goal of KBC in mind, and thus the hierarchies in the training data are even further disconnected - it is essentially a forest with many small trees. On the other hand, our dataset split (details of which are in Appendix A.2) removes edges from the training set which would be difficult to reconstruct by chance while still preserving the connectivity of the combined graph with respect to these hierarchical relations.\\n\\nWe will include the Krackhardt metrics for our proposed split and how it compares to WN18RR in the camera-ready of this paper.\\n\\n**Modeling more than two relations:** As evidenced by our evaluation on WN18RR, our approach can easily be extended to more than two relations by learning additional transformations. We are currently performing experiments with member_meronym as a third hierarchy and will include results for our method as well as the baselines (including hyperbolic embeddings) in the camera-ready.\\n\\n[1] Balazevic, Ivana, Carl Allen, and Timothy Hospedales. \\\"Multi-relational Poincar\\u00e9 graph embeddings.\\\" In Advances in Neural Information Processing Systems, pp. 4463-4473. 2019.\\n\\n[2] Chami, Ines, A. Wolf, D. Juan, F. Sala, S. Ravi and Christopher R\\u00e9. \\u201cLow-Dimensional Hyperbolic Knowledge Graph Embeddings.\\u201d ACL (2020).\\n\\n[3] Krackhardt, David. \\\"Graph theoretical dimensions of informal organizations.\\\" Computational organization theory 89, no. 112 (1994): 123-140.\"}",
"{\"title\": \"RE: Response to authors after rebuttal\", \"comment\": \"Thank you for responding to our reply. We address the concerns individually below.\\n\\n### Response to Q.1:\\n**Hyperbolic Baseline Parameter Tuning:** Our hyperbolic model used 20-dimensional embeddings to provide a fair comparison (the number of parameters per entity is 20 for all models). We tuned the other hyperparameters using Bayesian hyperparameter optimization with Hyperband early stopping over the following ranges:\", \"learning_rate\": \"[1e-1, 1e-7],\", \"regularization_weight\": \"[1, 1e-7],\", \"batch_size\": \"[256, 512, 1024, 2096],\", \"negative_samples\": \"[2, 100],\", \"add_bias_to_score\": \"[True, False].\\n\\nWe have added the code (which is directly based on https://github.com/HazyResearch/KGEmb) to the supplementary zip file.\\n\\n**Why multi-relational hyperbolic does not perform as good as single-relation hyperbolic:**\\n\\nAs observed in Patel 2020, Poincare embeddings require more depth information in order to model the transitive closure of a tree. The compositional edges across two relations in our model can be viewed as the transitive closure of an augmented graph. We hypothesize that the multi-relational hyperbolic embeddings may allow more flexibility in the sort of representable graph structure across different relations. While this is more desirable for other tasks (eg. KBC), it may mean that the model is even less biased toward modeling hierarchies that interact in this way. \\n\\n### Response to Q2.1:\\n\\nWe combine various subtypes for two reasons.\\nSome are not present in sufficient quantities to evaluate independently;\", \"hypernym\": \"74401 & Instance_Hypernym: 8645\", \"part_meronym\": \"10192 & substance_meronym: 1173.\\nOn their own, instance_hypernym and substance_meronym are not hierarchical. They are simply forests with several connected components with an average max path length of directed graphs formed by them are just over 1. On the other hand, Hypernym and Part_meronym are indeed hierarchical with max path length of 19 and 5 respectively. However, combining these insufficient relations with their dominating counterparts strengthens the overall hierarchical nature of the graph, e.g., combining instance_hypernym helps hypernym to become one single hierarchy with only 1 connected component.\"}",
"{\"title\": \"Response to authors after rebuttal\", \"comment\": \"Thank you for taking the time to respond to my questions and updating the paper. I have 2 additional questions/concerns:\\n\\n1. What is the dimensionality used for the multi-relational hyperbolic models compared to box embeddings? I'm surprised by their relatively low performance, which is in some cases even worse than that for single-relational hyperbolic models, which is unexpected.\\n\\n2. Even though the dataset used by the authors contains some of the relations present in WN18RR, that does not change the fact that the dataset used contains only 2 relations. Additionally, given that the subtypes member_meronym and has_part are both represented by the HasPart relation and the subtypes instance_hypernym and hypernym by IsA, it is impossible to conclude whether the proposed model would be able to differentiate between those subtypes (e.g. member_meronym and has_part) if they were separated. Reporting results on WN18RR (or another hierarchical multi-relational dataset) would help demonstrate: (i) whether the proposed model is capable of modelling more than 2 hierarchies simultaneously; (ii) whether the performance of the model degrades when some of the relations aren't hierarhical, which is important given that is the case with the majority of real-world KG datasets; and (iii) how the proposed model compares to the hyperbolic multi-relational models on a well-established KG benchmark.\\n\\nFor now, I am reluctant to change my original rating, but I would be happy to do so if the above concerns are adequately adressed.\"}",
"{\"title\": \"Visualization\", \"comment\": \"We have added the visualization of the learned embeddings and transformations for an example extracted from the wordnet dataset. This visualization has been added to Appendix A.4 of the revised version.\"}",
"{\"title\": \"Clarification regarding Gumbel Box (cont.)\", \"comment\": \"10. Note that we discuss training in section 3.2 and prediction in section 5.3. In short, any edge from X to Y (eg. Person to Man) can be modelled as P(X|Y) =1 (eg. P(person|man) =1, would ensure \\u201cperson\\u201d box contains \\u201cman\\u201d box). In case of negatives, P(X|Y) = 0. We achieve this via gradient descent using KL-divergence loss between P(Box(X)| Box(Y)) and given P(X|Y). During inference, we predict an edge if P(Box(X)|Box(Y)) > threshold, the threshold used for this classification is determined by maximizing the F1 score on the validation set.\\n11. Yes, the order embedding model reported here is the same as that described in Patel et al. 2020. In Vilnis et al. 2018, it was observed that box embeddings have strictly greater representational capacity than order embeddings, where (for example) the min coordinate of all boxes are fixed to the origin.\\n12. In all of our experiments, the evaluation data is constructed with a ratio of positive and negative to be 1:10 as mentioned in section 5.3. (GumbelBox itself was evaluated on the dataset you mention in Dasgupta et al. 2020, section 5.3, where it is shown to significantly outperform the model in Li et al. 2019.)\"}",
"{\"title\": \"Clarification regarding Gumbel Box\", \"comment\": \"Thank you very much for your detailed comments, we have addressed them individually below.\\n\\n1. GumbelBox was introduced by Dasgupta et al., 2020, where the authors demonstrate that it solves problems related to local non-identifiability and smooths the loss landscape of prior methods (hard and smooth boxes). In that work, the authors choose a Gumbel distribution because it is min/max stable, and therefore boxes whose min/max endpoints are parametrized via Gumbel distributions are closed under intersection (i.e. the intersection of two Gumbel boxes is another Gumbel box). We have clarified these points in the background section. All of our experiments are carried out using GumbelBox, as Dasgupta et al. 2020 shows it outperforms HardBox and SmoothBox in all tasks. As you rightly point out, the same transformation can be applied to SmoothBox and HardBox, in fact Dasgupta et al. 2020 point out that SmoothBox and HardBox can be viewed as special cases of GumbelBox for specific settings of hyperparameters (i.e. zero variance / temperature). We perform a sweep over these hyperparameters, and thus our results implicitly include these models as potential special cases.\\n2. We have reworked these sentences to make them a bit clearer. Our aim was not to suggest that the Gumbel distribution over endpoints, itself, has a strong inductive bias toward modeling hierarchies, but rather point out that Vilnis et al. 2018 demonstrate box embeddings effectively model hierarchies and Dasgupta et al. 2020 demonstrate that using the Gumbel distribution over endpoints makes this even more effective.\\n3. We agree, and although GumbelBox was not the main focus of this paper (having been introduced in Dasgupta et al. 2020) we have updated the paper to include a short discussion of the Gumbel distribution.\\n4. This is true, however: (1) the meet operation serves primarily to justify the theoretical properties of the box as a lattice, (2) we do not directly train or evaluate using the meet operation, as it is not needed for our queries. That being said, for a well-trained model of hierarchies, the meet is still meaningful, as the meet of a node and one of it\\u2019s descendents would simply be the node itself, and the meet of any two arbitrary nodes will provide the smallest containing box which, itself, is contained in the closest common ancestor node.\\n5. Thank you for pointing this out, there is a typo here which we have corrected to \\u201c(a\\u22641b)\\u2227(b\\u22642c)\\u2192(a\\u22642c)\\u201d. We\\u2019ve also added \\u201c(a\\u22642b)\\u2227(b\\u22641c)\\u2192(a\\u22642c)\\u201d, which corresponds to your example (Bird HasPart Wing, Wing IsA Appendage => Bird HasPart Appendage).\\n6. This approach can easily be extended to more than two hierarchical relations by learning additional transformations, however we are not aware of any dataset which contains three or more hierarchical relations in sufficient quantity/density such that modeling all three jointly would lead to improved inference. IsA and HavePart are both prevalent and fundamental relations, however, and it is our belief that modeling them correctly will lead to benefits on additional non-hierarchical relations, which is a major aim of our future work.\\n7. Note that the gumbel beta is a global parameter which is the same for all embeddings. (Dagupta et al. 2020 mention this is a requirement for GumbelBox to be closed under intersection.) 
In our experiments, we follow Dasgupta et al. 2020 and tune \\u03b2 on a validation set using Bayesian hyperparameter optimization. While it is possible to learn \\u03b2 via gradient-descent on the training set, it is likely that this would also quickly lead to local minima with very small \\u03b2 (due to the influence of negative samples), and thus it seems more appropriate for this to be treated as a global hyperparameter selected based on validation set performance, or even annealed throughout training.\\n8. We have tried more complicated transformations, including shallow MLPs, but a simple linear transformation actually outperforms them. This is likely due to a fundamental difference in the way that MLPs interpret their input (encoding information using the vector-space structure) and the way these vectors are used in the GumbelBox model, where they are eventually used to calculate ratios of expected intersection volumes.\\n9. One way to think about this is that the \\u201cIsA-Bird\\u201d box represents all things which are birds, and the \\u201cHasPart-Wing\\u201d box represents all things which have wings. A bird is something which has a wing, so it belongs in the \\u201cHasPart-Wing\\u201d box. A box for \\u201cHasPart-Bird\\u201d, on the other hand, would represent all things which have birds as a part of them, so perhaps the \\u201cIsA-Flock\\u201d would be inside this box (if such a Flock node existed). We have clarified this point in the updated version of the paper.\"}",
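As a companion to points 1 and 7 above, a minimal sketch of why Gumbel boxes are closed under intersection (min/max stability), together with a softplus-style approximation to the expected volume; the constant corrections from Dasgupta et al. 2020 (e.g. the Euler-Mascheroni term) are omitted here, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def gumbel_intersection(min_mu_a, max_mu_a, min_mu_b, max_mu_b, beta):
    # The max of two MaxGumbel(mu, beta) min endpoints is MaxGumbel with a
    # beta-scaled logsumexp location (and symmetrically for the MinGumbel
    # max endpoints), so the intersection is again a Gumbel box.
    i_min = beta * torch.logaddexp(min_mu_a / beta, min_mu_b / beta)
    i_max = -beta * torch.logaddexp(-max_mu_a / beta, -max_mu_b / beta)
    return i_min, i_max

def approx_expected_volume(min_mu, max_mu, beta):
    # Per-dimension expected side length approximated by a softplus at
    # temperature beta; as beta -> 0 this recovers the hard-box volume.
    return (beta * F.softplus((max_mu - min_mu) / beta)).prod(dim=-1)

beta = torch.tensor(0.1)  # global temperature, shared by all boxes
a_min, a_max = torch.zeros(2), torch.ones(2)
b_min, b_max = torch.full((2,), 0.5), torch.full((2,), 1.5)
i_min, i_max = gumbel_intersection(a_min, a_max, b_min, b_max, beta)
print(approx_expected_volume(i_min, i_max, beta))  # ~0.25 (0.5 x 0.5 overlap)
```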
"{\"title\": \"Hyperbolic graph embeddings evaluated for modeling joint hierarchies\", \"comment\": \"Thank you for your detailed comments, we will address them invidividually below.\", \"sec_2\": \"1. We agree that KB completion often requires modeling hierarchical relations, and moreover we evaluate KB completion models as our baselines for this task. Our intent was not to say they are entirely unrelated, but rather to simply point out the differences, which, as you point out, are mostly in the amount of emphasis on hierarchical relations and modeling goals. In short, we believe we are essentially in agreement on this point, and have attempted to clarify the wording of this in the paper.\\n2. Thank you for suggesting the hyperbolic embeddings for multi relational knowledge bases. We have added them to the related work section, and also evaluated them as baselines. (Also discussed again below.)\", \"sec_3\": \"As you point out, the meet is not actually the union, it is the smallest containing box, however (1) the meet operation serves primarily to justify the theoretical properties of the box as a lattice, (2) we do not directly train or evaluate using the meet operation, as it is not needed for our queries. That being said, for a well-trained model of hierarchies, the meet is still meaningful, as the meet of a node and one of it\\u2019s descendents would simply be the node itself, and the meet of any two arbitrary nodes will provide the smallest containing box which, itself, is contained in the closest common ancestor node.\", \"sec_4\": \"We have updated the image and explanation. The lack of a transformation on f_1(b) means that the transitive relations present in the IsA hierarchy interact exactly as desired with the HasPart hierarchy to encourage the composition edges. For example, since Dove is contained in Bird, if Bird is contained in HasPart-Wing, then Dove will also be contained in HasPart-Wing. A transformation on f_1(b) would add flexibility to the model but would no longer guarantee these compositional edges. Furthermore, if we think of the Bird box as representative of \\u201call things which are birds\\u201d and the HasPart-Wing box as \\u201call things which have wings\\u201d, then we don\\u2019t want or need an additional transformation on the Bird box in this scenario, as \\u201call things which are birds\\u201d is a subset of \\u201call things which have wings\\u201d.\", \"sec_5\": \"1. We have run additional experiments using MuRP[1], RotH, and AttH[2], and updated Table 2 and 3 with the results. We observe that the RotH and AttH models were able to learn the joint hierarchy to some extent, however, their generalisation performance is poor. We are also running RotE (the euclidean embedding version of the RotH) to investigate how much the inductive bias of the hyperbolic embeddings is helping in this task.\\n2. The hierarchical relationships in WN18RR are member_meronym, has_part, instance_hypernym, and hypernym, which are already present in our dataset (member_meronym and has_part are both HasPart, instance_hypernym and hypernym are IsA). The other dominant relationships are _derivationally_related_form, _verb_group, and _similar_to, which are symmetric in nature. Furthermore, more than 90% of the evaluation data coming from these relations have a reverse edge present in the training data, which makes modeling these relations trivial [1]. 
Most of the models achieve 0.93 MRR performance on this subset, including our method that has no inducative bias towards modelling symmetry.\\nRemoving this trivial symmetric subset and focusing exclusively on hierarchical data in the training split WN18RR yields a large number of connected components, as opposed to a deep hierarchy, and thus is not suitable to our goals of assessing the ability of a model to handle hierarchical relations.\\n[1] Pezeshkpour et.at. Revisiting Evaluation of Knowledge Base Completion Models, AKBC 2020.\\n3. In the overfitting task, we train on the whole transitive reduction and predict the performance of the composite edges. However, in the generalisation task we provide a subset of the hierarchies as training data and try to predict on the composition edges and missing edges as well. Thus these numbers are not a direct indication of the generalization gap since the training data is different for these two settings. For this reason, we study the generalization performance for different parts of the dataset in detail in section 5.5.\\n4. We are generating a visualization of the learned embeddings and will include it in the paper shortly.\"}",
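For illustration, a minimal sketch of a per-relation box-to-box transformation of the kind discussed above: a shared per-dimension affine map applied to the box endpoints. The exact parametrization of the paper's Eqs. 11-12 may differ; the class and parameter names are assumptions.

```python
import torch
import torch.nn as nn

class BoxToBoxTransform(nn.Module):
    # Maps an entity box (min corner, side lengths) to a relation-specific
    # box, e.g. f(Bird) -> HasPart-Bird, sharing one set of parameters
    # across all entities so the hierarchies remain coupled.
    def __init__(self, dim):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(dim))      # per-dim translation
        self.log_scale = nn.Parameter(torch.zeros(dim))  # per-dim scale (log)

    def forward(self, box_min, box_delta):
        # Apply x -> scale * x + shift to every coordinate; exp keeps the
        # scale (and hence the side lengths) positive.
        scale = self.log_scale.exp()
        return scale * box_min + self.shift, scale * box_delta

phi = BoxToBoxTransform(dim=2)
head_min, head_delta = torch.zeros(2), torch.ones(2)
rel_min, rel_delta = phi(head_min, head_delta)
```

Because each dimension undergoes the same monotone affine map, containment between entity boxes (e.g. Dove inside Bird) is preserved by the transformed boxes, which is the property that encourages the compositional edges discussed above.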
"{\"title\": \"Simple but effective\", \"comment\": \"Thank you for recognizing our contribution and promising experimental results as shown in the paper, and for providing the specific corrections, we have updated the paper accordingly. Although the proposed method is simple, prior to this work it was unclear how to effectively enable sharing of parameters between boxes for the purpose of transforming one graph to another. Based on other reviewer\\u2019s requests, we have also run additional baselines, including recent hyperbolic embedding methods, and find that our relatively simple model significantly outperforms them. A further contribution of our paper is the additional analysis of the model\\u2019s ability to generalize. Patel et al. 2020 was purely a representation task, which was appropriate for the model structure proposed, however sharing parameters allow us to evaluate our model for generalization capability, and we include a thorough breakdown and analysis of various types of generalization this model is capable of performing.\"}",
"{\"title\": \"Thank you!\", \"comment\": \"Thanks R1 for recognizing our contribution in proposing this novel method and promising experimental results in representing tree-like structures!\"}",
"{\"title\": \"An interesting paper.\", \"review\": \"This paper deals with tree-like structure embedding with box embedding on the lattice (poset). This paper is well-motivated and well-presented. Though there is a limitation on data structure, this paper still presents a novel idea in this area. This method also achieved promising results in experiments. Thus, I would like to recommend to accept this paper.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Seemingly incremental contributions, but with significant empirical improvements.\", \"review\": \"This paper builds upon the work of Patel et al. (2020) in modeling two hierarchies jointly within the box embedding framework. It also incorporates the GumbelBox formulation of Dasgupta et al. (2020) to resolve local identifiability issues during training.\\n\\nThe contribution of the paper seems to only lie in the learning of a function \\\\phi that maps entity boxes to HasPart-* boxes. This function constrains the HasPart-* boxes in two ways: (a) their \\\"minimum\\\" corners remain at the same relative positions as their corresponding entity boxes, and (b) their lengths are scaled proportionately in each dimension. In contrast, Patel et al. (2020) does not have these constraints in their model. I find the novelty of these constraints to be incremental, especially in view that the joint hierarchy problem and evaluation methodology have already been formulated by Patel et al (2020) in the context of box embeddings. Though seemingly straightforward, the constraints do help the paper to improve upon the state of the art by significant margins in the experiments. \\n\\nThe paper is well organized and clearly written for the most part, but the exposition can be improved in some areas.\\n\\n* Section 4.1, (a <_1 b) ^ (b <_2 c) => (a <_2 b): Could the authors provide examples of what <_1, <_2, a, b, and c represent? \\nI interpret \\\"a <_1 b\\\" to be b IsA a, \\\"b <_2 c\\\" to be c HasPart b, which then leads to \\\"c HasPart a\\\". This means that the consequent should (a <_2 c) rather than (a <_2 b), no? \\n\\n* Section 1, 3rd para: hiearchy->hierarchy, \\n\\n* Section 1, 4th para: dialate->dilate\\n\\n* Section 5.3, 1st para, \\\"for those edges in table 5\\\" -> should be \\\"table 2\\\"?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"nice experimental results, lack motivation and training details\", \"review\": \"The paper focuses on modeling multiple hierarchical relations on a heterogenous graph. The task \\u201cmodeling joint hierarchies\\u201d is essentially trying to infer whether a given pair of entities has a hierarchical connection especially when there exists multiple hierarchical relations (2 in the paper), and missing links. The paper proposes to embed entities using boxes whose endpoints follow the Gumbel distribution. Given there exists two hierarchical relations, the paper transforms the box of one entity under relation 1 to the box of the entity under relation 2 with a parameterized linear function. This is in contrast to previous work that parameterized the box of two relations using separate independent parameters.\\n\\nThe model seems sound, however I have two major concerns. (1) I do not think the model is motivated well, especially on why the model uses Gumbel distribution to parameterize the box. (2) The paper has no introduction how they train the model and use it for inference, what is the loss? This makes it hard to evaluate the correctness of the model.\\n\\nI am very satisfied with the extensive experiments the paper has conducted. They include many strong baselines including the order embeddings, hyperbolic embeddings and even some KG embeddings. The results on the KG embeddings clearly show that their methods work much better in this (a little specific) hierarchical relation modeling setting. The paper also introduces a new missing-edge setting, where they show that joint modeling achieves better generalization than independent parameters. \\n\\nSome detailed questions are listed below.\\n1. The related work states the difference between modeling hierarchies and knowledge base completion, however, it lacks discussion how their Gumbel box is different from previous box embedding methods (this should be added in the second paragraph). I understand the difference between the Gumbel box and the Two-box model, namely the Two-box model learns independent parameters. However, I did not find the discussion on the connection between the Gumbel box and hard/smooth box. Why cannot we apply the same transformation idea to previous hard and smoothed box embeddings so that they can also model joint hierarchies without optimization issues? Why is Gumbel distribution special and useful in parameterizing the boxes and modeling hierarchies?\\n2. The paper has some vague sentences like \\u201cthe authors demonstrate that this method of training boxes leads to better representation of trees thus we will use this Gumbel box approach in our setting.\\u201d and \\u201csince gumbel boxes effectively model hierarchies, we would like to benefit from the inductive bias of this model for intra-relation edges and thus we seek to learn a function ...\\u201d, but what is the inductive bias of Gumbel? It\\u2019s better to clearly state it.\\n3. The paper lacks a short discussion and introduction to the Gumbel distribution in the background section, especially on the parameters \\\\mu and \\\\beta.\\n4. As defined in Eq. 3, the meet of two boxes may include some blank space that does not belong to the input boxes, do you think this will have any issues, especially when the two input boxes are far away from each other?\\n5. Sec 4.1, first paragraph, \\u201c$(a \\\\leq_1 b) \\\\wedge (b \\\\leq_2 c) \\\\to (a \\\\leq_2 b)$\\u201d is wrong. 
Bird has part Wing, and Wing is an Appendage, but Bird is not a Wing.\\n6. Sec 4.1, end of page 4, \\u201cTo simultaneously model a second relation, we ...\\u201d, so the model can only model two hierarchical relations? If so, I think it is a little limited and can the model provide a way to generalize beyond two hierarchical relations?\\n7. Sec 4.1, \\u201cthe free parameters are $\\\\mu_i$ and $\\\\Delta_i$\\u201d, why does the model not learn $\\\\beta$?\\n8. As in Eq. 11 and 12, the transformation is a rather simple linear transformation, have you tried something that is more complex, e.g. a MLP? \\n9. I am also confused by Remark 1 and Eq. 8. For Bird, there should be two boxes where one represents the IsA relation and the other represents the HasPart relation, right? Then in Figure 1, why is the IsA-Bird box inside the HasPart-Wing box, I think it should be the HasPart-Bird box inside the HasPart-Wing box.\\n10. The paper does not introduce how to train the model or even how to make predictions during inference in Sec 4. I understand the page limit but these two aspects are essential to a machine learning model.\\n11. What is the difference between the two-box model and the order embeddings in the experiments? I assume if you apply the order embeddings to this multi-hierarchical relation setup, then it is the same as the two-box model?\\n12. I am curious about the performance of the proposed model in an imbalanced dataset (as introduced in Li et al. ICLR 2019), where the ratio of positive and negative is 1:10?\", \"minors\": \"The paper does not have grammar mistakes and here are some minor points.\\n1. Make it explicit in the introduction that the \\u201cTwo-Box Model\\u201d is referred to Patel et al. (2020)\\n2. The definition of box lattice model is not self-contained in Eq. 1, what is $x_i$ and $x^i$? I guess it is the two end points of the box in one dimension. Better to state it clearly.\\n3. Sec 3.3, \\u201cFor example, as shown in 1, based on\\u2026\\u201d -> \\u201cFor example, as shown in Figure 1, ..\\u201d\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Promising paper with lack of experimental support\", \"review\": \"This work proposes to model multiple hierarchical relations using box embeddings, motivated by the natural transitivity property of the containment between regions in region-based representations. The proposed model is evaluated on a dataset containing two relations (is-a and has-part). Although the proposed model shows promise by outperforming several baselines on the above mentioned dataset, I believe that the paper is not ready for publication in its current form, mainly due to (i) missing comparison to the highly relevant line of work on hyperbolic embeddings of hierarchical multi-relational data; and (ii) lacking additional experiments on a dataset with more than 2 relations.\", \"detailed_comments_and_questions_for_the_authors\": \"Sec. 2\\n1. I find the claim that \\\"Modeling joint hierarchies is not quite the same as knowledge base completion.\\\" to be unsubstantiated. This is true to an extent, since the ultimate goal of KB completion is inferring which other facts are true based on existing ones. However, to achieve this goal, KG completion models need to learn entity and relation representations which capture various properties of entities (e.g. semantics) and relations (e.g. transitivity, symmetry, etc.), which is very similar to the main idea of this work.\\n2. A whole line of very relevant work on hyperbolic embeddings of hierarchical relations in knowledge bases is missing [1, 2].\\n\\nSec. 3\\\\\\nIf I understood Definition 2 correctly, the meet (i.e. union) of two boxes will be another box which in most cases contains an area that is not part of either boxes (since a union of two boxes is not necessarily a box). Doesn't this introduce errors into box embeddings which increase with increasing the dimensionality of the embeddings?\\n\\nSec. 4\\\\\\nIt is not clear to me why the lack of a transformation on f_1(b) encourages the containment in figure 1. Could you please explain this point further?\\n\\nSec. 5\\n1. While the achieved results seem impressive, as mentioned above, a highly relevant comparison to [1] and [2] is missing. Both missing works are embedding models that represent multiple simultaneous hierarchies in hyperbolic space and where entity embeddings are shared across relations, which should lead to better generalisation on missing edges (as claimed by the authors).\\n2. The authors evaluate the proposed model on a single dataset with only 2 relations. The proposed model should be evaluated on at least one more dataset, e.g. WN18RR [3], since [1] show several relations in that dataset to be hierarchical.\\n3. I'm surprised that the improvement over the TWO-BOX model is lower when testing for generalisation capability (5%) than in the original setup (8.5%), given the original premise that the proposed model should benefit from sharing information across hierarchies.\\n4. It would be nice to see a visualisation of the learned embeddings.\", \"minor_comments\": \"\\\\\\nBackground section should be made shorter, especially the part regarding the probabilistic box model training, which is not that relevant to the overall goal of this work. This space could be used for additional experiments proposed above.\\n\\n[1] Balazevic et al. Multi-relational Poincar\\u00e9 Graph Embeddings, NeurIPS 2019\\\\\\n[2] Chami et al. Low-Dimensional Hyperbolic Knowledge Graph Embeddings, ACL 2020\\\\\\n[3] Dettmers et al. 
Convolutional 2D Knowledge Graph Embeddings, AAAI 2018\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
S7Aeama_0s | QRGAN: Quantile Regression Generative Adversarial Networks | [
"Sunyeop Lee",
"Tuan Anh Nguyen",
"Dugki Min"
] | Learning high-dimensional probability distributions by competitively training generative and discriminative neural networks is the prominent approach of Generative Adversarial Networks (GANs) among generative models for modeling complex real-world data. Nevertheless, training GANs is likely to suffer from non-convergence, mode collapse, and gradient explosion or vanishing. Least Squares GANs (LSGANs) and Wasserstein GANs (WGANs) are representative variants of GANs in the literature that diminish the inherent problems of GANs by modifying the loss functions. However, LSGANs often fall into local minima and cause mode collapse, while WGANs unexpectedly encounter inefficient computation and slow training due to the constraints in their Wasserstein distance approximation. In this paper, we propose Quantile Regression GAN (QRGAN), in which quantile regression is adopted to minimize the 1-Wasserstein distance between the real and generated data distributions, as a novel approach to modifying loss functions for the improvement of GANs. To study the culprits of the mode collapse problem, the output space of the discriminator and the gradients of fake samples are analyzed to see if the discriminator guides the generator well. And we found that the discriminator should not be bounded to specific numbers. Our proposed QRGAN exhibits high robustness against the mode collapse problem. Furthermore, QRGAN obtains an apparent improvement in Frechet Inception Distance (FID) for generation performance assessment compared to existing variants of GANs. | [
"Quantile Regression",
"Generative Adversarial Networks (GANs)",
"Frechet Inception Distance (FID)",
"Generative Neural Networks"
] | Reject | https://openreview.net/pdf?id=S7Aeama_0s | https://openreview.net/forum?id=S7Aeama_0s | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"fMr9fD0zsvc",
"uCE0xjCy9eR",
"MFj2Hhes3kZ",
"6DDc5KD-s0X",
"QWcD-Tnwj3D",
"CnE9EdL8aQT"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512520,
1604548906841,
1603992902031,
1603950845135,
1603878000079,
1603589455945
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3682/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3682/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3682/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3682/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3682/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All the reviewers agree that this paper was poorly written, which I agree upon my own reading of this paper. Section 1 is rather telegraphic and difficult to comprehend. Section 2 is cryptic in several respect, including what space of probability distributions the authors consider the Wasserstein distance, what QR task the objective function (5) for the discriminator corresponds to, especially after letting $a=+\\\\infty$ and $b=-\\\\infty$, and so on. The numerical experiment results do not seem convincing enough to demonstrate advantage of the proposal over existing methods. The authors did not respond to the reviews, so that many concerns raised by the reviewers have not been resolved. I would thus recommend rejection of this paper.\"}",
"{\"title\": \"The quantile regression to GANs looks new, but not well demonstrated by either theoretical or empirical evidence\", \"review\": \"\\u2013 Summary \\u2013\\n\\nThe paper proposes a new GAN method that applies the quantile regression of reinforcement learning into GAN and aims to show this helps to estimate the 1-Wasserstein distance better without gradient regularization. The idea of quantile regression presented in the paper is a way to match two distributions like WGAN-GP yet at a more grained level and need no regularization like WGAN-GP. The experiments are conducted on 2D-toy examples (Ring-8, Grid-25) as qualitative results and three other image datasets (CIFAR-10, LSUN-Bedroom, Cats) with FID scores. The proposed method is compared with some GANs baselines: SNGAN, LSGAN, and WGAN-GP. \\n\\n\\n\\u2013 Strength \\u2013\\n\\nS1 - The paper proposes a new idea to apply quantile regression into GANs.\\n\\n\\n\\u2013 Weakness \\u2013\\n\\nW1 - The paper is not well-written, and the paper representation is not good.\\n\\nW2 - The performance of the proposed method does not look outperforming the WGAN-GP even though the paper strongly claims the robustness of this method. As shown in Fig. 4, 5, the proposed method converges faster, but is not necessarily better than WGAN-GP at the end. It looks WGAN-GP converges much more stable than the proposed method.\\n\\nW3 - It does not make sense why WGAN-GP is so bad on Cats dataset as shown in Fig. 6. It could be just the problem of parameters-tuning?\\n\\nW4 - It's unclear why Fig. 2 misses the WGAN-GP?\\n\\nW5 \\u2013 The paper does not convince me why the proposed method is better than WGAN-GP in either theoretical and empirical results. In addition, the paper does not provide sufficient theoretical content to show the 1-Wasserstein distance is the same as minimizing quantile values as claimed.\\n\\nW6 - Many mathematical notions are not explained, e.g., What is $\\\\rho_{\\\\hat{\\\\tau}}$ in Eq. 4? How do the authors implement with $a = \\\\infty$ and $b = -\\\\infty$?. How is $o_{i, \\\\tau}$ computed?\\n\\nW7 - FID scores alone may be biased, the combination with IS is required in the experiments.\\n\\nW8 \\u2013 The experimental results are not sufficient, e.g., the results are with only standard DCGAN architecture, and the paper would need more ablation studies on some selected hyper-parameters, e.g., $a, b, N, k$ ... in the method.\\n\\nOverall, I think the paper is far to meet the conference's standard, e.g., at paper presentation, strong empirical or theoretical evidence to justify the claims. It also would need substantial revision to improve in writing. I tend to reject the paper.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Although the proposed method is interesting, there are many errors and shortcomings in the paper.\", \"review\": [\"The authors propose the Quantile Regression GAN (QRGAN) to minimize the 1-Wasserstein distance between the real and generated data distributions. The proposed method avoids the mode collapse problem and obtains an improvement in the FID score compared to some existing GANs.\", \"Pros:\", \"Compared with NSGAN and LSGAN, the proposed method avoids mode collapse and achieves better performance than them in the FID score.\", \"In addition, compared to WGAN-GP, it achieves better FID with less training iterations while maintaining comparable performance.\", \"Cons:\", \"Overall, there are so many typos and grammatical errors in this paper that they make it difficult to understand the content of the paper. The authors must look for these errors and correct them.\", \"Also, the way of citing figures and tables is inappropriate. For example, Fig. 1 is not cited in the main text, and figures and tables in appendixes such as Fig. 7 and Table 2 are cited without specifying that they are in appendixes. Reviewers don't need to read the appendixes, so the content should be complete in the main text.\", \"The authors state that the relationship between quantile regression and 1-Wasserstein distance is shown in section 2.1, but this is not explicitly shown. In particular, the authors state in section 2.1 that \\\"Here, minimizing 1-Wasserstein distance is same to minimizing distance between quantile values\\\", but Eq.1 and Eq.2 are simply p-Wasserstein distance and 1-Wasserstein distance, so it is unclear which equation represents the relationship. Also, the period in Eq. 3 should be a dot (multiplication).\", \"In Eq. (4), you state that a and b are set to +\\u221e and -\\u221e respectively, but how were these infinities implemented in practice?\", \"Why is there no WGAN-GP result in Figure 2? My understanding is that QRGAN minimizes 1-Wasserstein like WGAN-GP, so the result is almost the same. And why didn't the authors include unrolled GAN and VEEGAN results for comparison, even though they performed the same experiments as these papers?\", \"If the authors claim that WGAN-GP is computationally expensive, they should show how much less expensive it is in QRGAN. QRGAN also requires the sum of multiple quantile values, so the more of them, the longer it should take to compute them. Also, as far as I read, there is no indication in the paper of how the number of quantile values was set up in the experiment.\", \"Looking at Figure 4 and Figure 5, GRGAN appears to be less stable than WGAN-GP. Why is this?\", \"In section 3.2, the authors should show the image actually generated by GANs.\", \"Minor comments:\", \"It is difficult to read because the author's citation is not enclosed in parentheses.\", \"Some parts of the random variables are in bold type and others are not. These notations should be consistent.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Having the discriminator output a whole distribution. Nice idea, average paper\", \"review\": \"# General statements\\nthe core contribution of this paper is to train GAN by designing the discriminator so that it outputs a whole distribution instead of a point estimate for \\\"realism\\\". This distribution is instantiated through its quantiles, and the whole approach is thus framed in a quantile-regression framework. The nice feature is that using losses over quantiles means using a wasserstein distance, which has strong properties in a GAN setting.\\n\\nThe idea is interesting and is definitely worth investigating. I am not 100% sure it was not presented previously. But I trust the authors on this.\\n* All in all, the paper is average in terms of english usage: totally acceptable in the beginning, there is a strong degradation for the experimental section, that has been either written in haste, or by another author than the rest.\\n* The actual performance is rather disappointing, since the authors do not clearly manage to demonstrate any superiority of their proposed approach vs author (classical) methods, except on toy data. I don't see this as a real issue though, because the fact that the paper is inspiring is what I believe is most important.\\n* The quantile regression loss is not presented in a way that allows people not knowing it already to understand the paper. This must be changed.\\n* you didn't study the impact of the number of quantiles, although it looks like something you definitely should have done, since it's the core contribution of the paper.\\n\\n\\n# Detailed comments\", \"below_are_remarks_and_typos_found_along_the_way\": [\"## Abstract\", \"\\\"And we found that he discriminator should not be bounded to specific numbers.\\\" is unclear here.\", \"## Introduction\", \"\\\"Text or structured data [...] of the world \\\" awkward\", \"\\\"p(z|x), not p(z)\\\" awkward\", \"\\\"Mode collapse is caused by unstable training and improper loss function\\\" : reference ?\", \"\\\"propose to use mean square error (MSE) which does not saturate\\\": at this stage, you didn't introduce this concept of \\\"saturation\\\". And you don't tell where the MSE is applied\", \"\\\"results better models\\\" : results in better models ?\", \"\\\"Reinforcement Learning (RL):is to learn\\\": awkward. And we don't understand why you're mentioning RL at this stage. Above, you only introduced VAE and GAN for your purpose. This paragraph comprises no reference.\", \"\\\"and gradually reach to the optimal policy.\\\": typo\", \"The reason why you introduce RL becomes clearer after you introduced QR-DQN. I think that it's however a bit weird as it is framed currently. You should mention that he inspiration and motivation of your work originates from RL and some of its recent developments.\", \"\\\"Discriminators whosetarget is specified\\\": what do you mean ?\", \"\\\"mixture of gussian\\\"\", \"## Quantile Regression GAN\", \"one really needs to know the trick already to understand equation (3). You must provide a reference here and explain the relationship between quantile regression and (3) as a loss.\", \"\\\"D_{\\\\tau(batch)}\\\" instead of \\\"D_\\\\tau(batch)\\\" ? above equation (4)\", \"it reads rather uncommon to me to write that infinity (negative or positive) is the objective of the discriminator, with a regularization that constraints its magnitude. 
I suspect the reason for this to work is: you don't actually provoke some strong collapse of the discriminator output to a specific value (a or b), but rather enforce that it stays somewhere in the approximative range [-k/2M k/2M]. This looks like a nice trick. But I would have appreciated some discussion about it.\", \"Is there a reason why you picked 1-wasserstein rather than 2- or p-wasserstein ?\", \"We replace D\\u03c4(xreal) by \\u221e to prevent it updating to decrease the discriminator output\\\": awkward sentence.\", \"Actually, I don't really understand (7). What is \\\"x_real\\\" in the setting of trainng the generator ? You just have fake samples at this stage, and you're indeed using your discriminator for computing your los. I would have written min |a-D(x_fake)|.\", \"maybe I'm missing something, but in Alg. 1, I don't clearly understand the difference between your notation o_{i,\\\\tau} and D_\\\\thau(x_i). Aren't them the same ? I understand that in your definition of D, you average over the batch. but here in this algorithm box you use a notation D(x_i), which makes it identical with o_i as far as I understand.\", \"## Experiments and results\", \"### toy\", \"\\\"arranged in grid (5x5)\\\": the `x` doesn't render well.\", \"\\\"by normalized computed gradients by generator loss\\\": awkward\", \"\\\" ourput spaces\\\"\", \"\\\"steep slope appears\\\": slopes appear. \\\"very gentle slope appear\\\": gentle slopes appear ? \\\"gentle\\\" reads awkaward to me.\", \"This whole paragraph is extremely badly written and must be written completely, from \\\"As we can see\\\" to \\\"less affected by noise\\\". The english there is very bad, I don't understand what happened out of a sudden.\", \"I don't understand what is depicted for (d) and (e) in figure 3: since the output of your discriminator is a whole distribution, what is it exactly that you decided to plot ? Did you pick a specific quantile ?\", \"\\\"Instead, we can model a discriminator to predict distance. If discriminator predicts distance, itshould be less affected by noise.\\\" what should I understand here ? I am sorry but I really don't understand the discussion here. are you eventually discarding your model and changing it for something else that would predict a \\\"distance\\\", whatever it means ?\", \"### image\", \"\\\"the checkerboard artifacts is\\\"\", \"how many quantiles are you using ?\", \"Inspecting your results on figures 4-6, I'd say they don't look particularly favorable. i/ For CIFAR10, they look kind of similar with NSGAN and LSGAN, and eventually WGAN-GP gets better. ii/ same result for LSUN, although the proposed method looks better than NSGAN and LSGAN at the end, after much instability. still catched up by WGAN-GP eventually. iii/ for cats, your method looks totally similar to NSGAN and LSGAN, and there was apparently some problem in the finetuning of WGAN-GP, that just didn't train for some reason you should have investigated. It really doesn't look like what happend for it with the other datasets. table 1 hence should not be taken too seriously, unless you really can tell that this WGAN-GP could not be made better on this \\\"cats\\\" dataset.\", \"## Acknowledgments: should that be part of a double blind review ?\", \"## References:\", \"are not consistent. Sometimes full names, sometimes just initials. 
please make consistent.\", \"## Apendixes\", \"I am not sure appendix B is necessary\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Promising Idea, but experiments need more work\", \"review\": \"Summary. The paper proposes to use quantile regression as an alternative GAN loss. The idea is to evaluate the quantiles of discriminator outputs instead of just using a single scalar discriminator value: while all quantile discriminator outputs for real/fake data is pushed to +infty/-infty, the generator's cost function tries to push the quantiles of fake data to be as realistic as possible (+infty). As the regression loss leads to runaway values, a L2 regularizer is put on the discrimantor output quantiles. Experimental results show very good mode coverage on Gaussian Mixtures. Further experiments demonstrate it's in the ballpark of WGAN-GP on CIFAR-10, best on CATs, and second on LSUN-Bedroom.\", \"reasons_for_score\": [\"While the paper presents a reasonable and (to me) novel idea in GANs, the experiments fall short in comparing to the state of the art, and also the ingredients in the method are not sufficiently dissected to attain a sufficient understanding. Hence, I suggest to reject the paper in its current form.\", \"Pro.\", \"the paper proposes a simple and novel target function for GAN training, that makes intuitively sense\", \"the mode coverage on the toy Mixture of Gaussian experiments look very solid\", \"Con.\", \"experiments:\", \"on CIFAR-10 etc. lack comparison to state-of-the art (sota) methods, which is necesary to put the work in context. E.g. it should be added SN-GAN (Miyato et al.), StyleGAN(2).\", \"similar for the Mixture of Gaussian: comparison to other standard methods in the literature that tackled this problem are missing (e.g. Unrolled GANs).\", \"also, please check again WGAN-GP on cats - why is WGAN-GP seemingly not training at all (and starting at a much higher level from the start in Figure 6)?\", \"the regularizer in equation 5 pushes both discriminator for real and fake towards the same values - hence potentially counteracting stability issues in training. This should be investigated independently to understand the effect of this regularizer in isolation (can even be formulated also for standard GAN KL losses, e.g. by pushing the average to 0.5). Otherwise it remains unclear if the benefits of the methods are attributable to the quantile regression or this regularizer\", \"the exposition should be improved:\", \"pg 2: too much stuff on RL - this is not needed in the paper and should be shortened to a minimum\", \"the method could be written to be understandable more easily (e.g. give a high-level description of the intuition similar to my Summary above before diving deep into the formulae)\"], \"rebuttal\": [\"please address the points on the Con side.\"], \"minor_issues\": [\"the relation to standard divergence minimization remains unclear; i.e. in particular it remains theortically unclear if this really converges to the target distribution (the empirical results seem encouraging though)\", \"many typos and language needs improvement - please check the document carefully again\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for QRGAN\", \"review\": \"1. This paper contains many typos, grammar mistakes, and format problems, which make it very hard to read. For most equations, there is no ending period. The citation format seems wrong.\\n\\n2. The equations (1) and (2) are for 1-dimensional random variables. They may not be suitable for high dimensional random variables. The $N$ quantile values defined in this paper are for which random variable?\\n\\n3. For the neural network for $D_\\\\tau$, does it mean the input is $(\\\\tau, X)$? So for different $\\\\tau$, they share the same weights. What is the meaning of this output?\\n\\n4. The \\\"$+\\\\infty$\\\" and \\\"$-\\\\infty$\\\" notation is confusing. Does this mean we choose a very big value or a very small value in practice? But this is very subjective now. Did you perform some sensitive analysis on the choices of $a$ and $b$?\\n\\n5. The authors claim that the WGAN training is slow. For WGAN-GP, I don't see why the training is much slower than the training of QRGAN. Did you perform some analysis on the training time?\\n\\n6. The arguments for QRGAN to overcome mode collapse is quite vague. Many GANs with the encoder structure can solve the mode collapse well. It may benefit to compare QRGAN with these methods.\\n\\n7. What is the implication of Table 1? Does that mean WGAN-GP is better than QRGAN? The generative images are not demonstrated. Other dataset such as CelebA can be applied to check the performance. Image interpolation can also be demonstrated for mode collapse situation.\\n\\n8. The Appendix B is completely not necessary. It contains only well known results.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
tij5dHg5Hk | Run Away From your Teacher: a New Self-Supervised Approach Solving the Puzzle of BYOL | [
"Haizhou Shi",
"Dongliang Luo",
"Siliang Tang",
"Jian Wang",
"Yueting Zhuang"
] | Recently, a newly proposed self-supervised framework Bootstrap Your Own Latent (BYOL) seriously challenges the necessity of negative samples in contrastive-based learning frameworks. BYOL works like a charm despite the fact that it discards the negative samples completely and there is no measure to prevent collapse in its training objective. In this paper, we suggest understanding BYOL from the view of our newly proposed interpretable self-supervised learning framework, Run Away From your Teacher (RAFT). RAFT optimizes two objectives at the same time: (i) aligning two views of the same data to similar representations and (ii) running away from the model's Mean Teacher (MT, the exponential moving average of the history models) instead of BYOL's running towards it. The second term of RAFT explicitly prevents the representation collapse and thus makes RAFT a more conceptually reliable framework. We provide basic benchmarks of RAFT on CIFAR10 to validate the effectiveness of our method. Furthermore, we prove that BYOL is equivalent to RAFT under certain conditions, providing solid reasoning for BYOL's counter-intuitive success. | [
"representation learning",
"self-supervised learning",
"contrastive learning",
"regularization",
"theory"
] | Reject | https://openreview.net/pdf?id=tij5dHg5Hk | https://openreview.net/forum?id=tij5dHg5Hk | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SZWM3Zmn1-S",
"lVkhGm1JujR",
"73rFsHqP6Bu",
"xeNO-cW0aAP",
"-2IiV-3N9s3",
"8DkYJ2f1tcR",
"Q2qedEYua-Q",
"M5nKi6k4eZ",
"bLQo-ZBk1eq",
"Hbfpvs6x3b9",
"JGO_OAVdEQU",
"p40Znp-nO1_"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040455431,
1605978367558,
1605977670241,
1605977185917,
1605975192246,
1605973840238,
1605972402621,
1605971352989,
1604067707962,
1603848648332,
1603755792742,
1603456785303
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3680/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3680/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3680/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3680/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Most of the reviewers and AC found many claims of this submission unsubstantiated.\"}",
"{\"title\": \"Reply to R2\", \"comment\": \"We would like to thank R2 for the valuable questions and advice. We would like to respond to your concerns one by one in this post.\\n\\n## Concern 1\\nThanks for pointing out that our paper\\u2019s title oversells the contributions we made in our work. We sincerely apologize for that. While besides the title itself, we are meticulous in the statement we make in the main text of our paper: we avoid claiming we solve the problem why BYOL works, and we further point out that RAFT is just \\u201cconceptually working\\u201d, but its success is also dependent on the predictor. As for the reason why our main focus is on the predictor instead of other elements, we would like to give an informal explanation. \\n\\n**Why we don\\u2019t analyze BN in the predictor:** the blog post[1] which motivates the author of [2] states in its main content clearly shows that when the BN is removed from the predictor, BYOL continues to work (refer to Table \\u201cPerformance for each variation\\u201d). \\n\\n**Why we don\\u2019t analyze BN in general:** firstly according to the appendix of the blog post[1], when the BN is both removed from the predictor and the projector, training for longer epochs would make the model recover. Secondly, the author of [2] also states in the openreview reply that the BN is just a \\u201csufficient condition\\u201d, which might not be crucial to the success of BYOL. Besides, the original author of BYOL also published another version of BYOL that doesn\\u2019t require the batch statistics at all[3], indicating that BN might not be the most essential component. \\n\\n## Concern 2 \\nWe don\\u2019t focus on the relative performance between three algorithms we evaluate in the paper (BYOL, BYOL\\u2019, and RAFT). However, the only comparison that\\u2019s crucial to our conclusion is the comparison between the algorithms and the random baseline. We focus on whether the algorithm works or not, e.g. whether the representation collapse happens. RAFT is a more conceptually working algorithm lies in one analysis and one experimental result:\\n\\n**Analysis:** In Figure 2b (2a in the latest version), when two samples are mapped to have initial different gradient, the RAFT would separate them in the next couple of iterations. \\n\\n**Experimental Result:** In Table 1, the RAFT loss remains to be effective regularizing the alignment loss, which indicates that RAFT is more unified learning objective compared to BYOL and BYOL\\u2019. \\n\\n## Concern 3\\nDue to the limitation of our computational resources, we don\\u2019t train our method as [4] in 1000 epochs. Instead we train them in 300 epochs, which could be told by the trend of the accuracy doesn\\u2019t achieve the best performance since BYOL performs better when trained longer. More importantly as we stated in the response to Concern 2, we only care whether the algorithm outperforms the random baseline, since that\\u2019s our main focus on explaining how BYOL avoids the representation collapse. \\n\\n## Concern 4\\nThank you for pointing out that evaluating the algorithms by other large-scale datasets would make our work more solid. We are trying to gather more computational resources to evaluate our proposed RAFT and BYOL. However we would like to also point out that based on the experimental results of BYOL paper and our paper, the dataset would not change BYOL from collapse to non-collapse, nor in the other way. Thus in this respect, the dataset is not so crucial to our final conclusion. 
Again, we thank the reviewer for this advice and we would improve the solidness of our work in the near future. \\n\\n## Concern 5\\nThank you for noticing this phenomenon (which shows that you understand our work from the alignment-uniformity framework, and that\\u2019s what really makes us happy)! Unfortunately, this is also unclear to us. Our paper disentangles the analysis of BYOL into two separate parts: \\n- When is BYOL approximately equivalent to BYOL\\u2019?\\n- How does the predictor help produce the representation uniformity? \\nWhile these two problems are also challenging, our main contribution is that we provide empirical results supporting the legitimacy of these two parts. We don\\u2019t expect that we completely solve them in a single paper, which means we leave the job of analyzing them to the future. \\n\\n\\n[1] Abe Fetterman & Josh Albrecht. \\u201cUnderstanding self-supervised and contrastive learning with \\\"Bootstrap Your Own Latent\\\" (BYOL).\\u201d https://untitled-ai.github.io/understanding-self-supervised-contrastive-learning.html\\n[2] Tian, Yuandong, et al. \\\"Understanding Self-supervised Learning with Dual Deep Networks.\\\" arXiv preprint arXiv:2010.00578 (2020).\\n[3] Pierre H. Richemond et al. \\u201cBYOL works even without batch statistics.\\u201d arXiv preprint arXiv:2010.10241 \\n[4] Ermolov, Aleksandr, et al. \\\"Whitening for Self-Supervised Representation Learning.\\\" arXiv preprint arXiv:2007.06346 (2020).\"}",
"{\"title\": \"Reply to R3 (part 3/3)\", \"comment\": \"### Result (4)\\nFirstly, thank you for the advice that adopting other datasets other than CIFAR10 would increase the reliability of our experimental results and we are aware of that. We are currently actively searching for more computational resources to validate the effectiveness of our RAFT algorithm, while we would like to point out that the data distribution (dataset) will not change whether a method collapses or not. If any, as mentioned by the reviewer, the poor quality of CIFAR10 (\\u201cfew classes, small images, few discriminative features\\u201d) only increases the difficulty for an algorithm to work. Most importantly, being aware of the shortcoming of the dataset, we never claim the superiority of the RAFT in terms of its better capability generating good-quality representations. Instead, the RAFT loss is more unified from the perspective of algorithm designing: we know alignment and uniformity are two terms regularizing each other, and we want to leverage the EMA of the history models, would we choose to fit to it or run away from it? The answer is pretty clear. \\n\\nSecondly, the same logic applies when we address the concern you raised: \\u201cBYOL was not correctly tuned.\\u201d The optimizer doesn\\u2019t affect the property whether the model collapses or not, which is demonstrated by the BYOL original paper and our experimental results. The cosine decay trick is only crucial in terms of the final performance: with or without it, the model effectively works better than the random baseline. Therefore we don\\u2019t consider discussing them as we focus on the essential component making BYOL avoid collapse. \\n\\n### Shortcut (1)\\nWe sincerely apologize if our phrasing causes your misunderstanding on our claim that \\\"predictor is a dissatisfactory property of BYOL\\\". When we say \\\"dissatisfactory\\\" we mean that the current explanation on BYOL doesn't consider the predictor seriously. By making this argument, we want to emphasize the importance of the predictor and what we did in this work was to at least show the efficacy of it with respect to the fact that the predictor helps establish the equivalence between BYOL' and RAFT. One main contribution of our work is to emphasize that there are some additional unexpected effects brought by this predictor, and the equivalence between BYOL\\u2019 and RAFT is one of them. Analyzing the efficacy of the predictor is a must if we want to fully understand how BYOL works. \\n\\n### Shortcut (2)\\nWe acknowledge that the approximate equivalence between BYOL and BYOL\\u2019 is supported only by the empirical study. While given the similarity between the two losses and the upper bounding relation, the closeness between the two is somewhat obvious. We would like to explore under what condition this equivalence holds in the near future. \\n\\n### Shortcut (3)\\nIn our paper, our claim is made on the BYOL\\u2019, which is composed of two loss terms, but not BYOL itself. Based on the previous work of alignment-uniformity framework, that the alignment is regularized by the uniformity objective, our claim that cross-model term regularizes the alignment term is self-contained. 
\\n\\n### Shortcut (4)\\nThe RAFT loss is better than BYOL in terms of leveraging the MT because when the predictor is removed, running away from MT remains to be an \\u201ceffective regularizer\\u201d for the alignment loss, thus is a more unified method compared to BYOL and BYOL\\u2019.\\n\\n## About Writing\\nAgain, we are extremely sorry if our writing disturbs you. Our intention of writing this paper was to put all the materials in the main text. Unfortunately, due to the limitation of length, we have to put some of the empirical evidence and theoretical proof to the appendix. We will upload another version of our work to arxiv that properly addresses our writing problem. Besides, we only mention the word \\u201cprojector\\u201d in the appendix, never in the conclusion. We believe the reviewer\\u2019s accusation of our discussing unpublished results is unintentional and we fully understand it. \\n\\nTo summarize, we are grateful that R3 provides so many important reviews and questions. However, we sincerely request a full re-evaluation of our work after our clarification on the misunderstanding.\"}",
"{\"title\": \"Reply to R3 (part 2/3)\", \"comment\": \"## Response to the Valuable Advice & Other Comments\\nWe would like to address the concerns raised by R3 in the \\u201cResults\\u201d and \\u201cShortcuts\\u201d section. As for the \\u201cWriting\\u201d, we apologize if our phrasing somehow disturbs you. We will try to mild our excitement of finding that optimizing two opposite losses would yield the same effect and shift to a more formal language. Below we list the summarization and the response, please correct us if we misunderstand your advice. \\n\\n### Result (1)\\n**Review.** The reviewer summarizes the BYOL\\u2019 upper bound loss as the distance between the \\u201cprojection and the projector\\u201d. The reviewer thinks that BYOL itself doesn\\u2019t minimize a loss due to the stop gradient, which the reviewer thinks is supported by the non-stationary distribution of the target network and the non-convergence observed in BYOL. Therefore, the reviewer concludes that analyzing and altering the loss would be wrong, which is supported by our visualization results in appendix F.1 and offers us a better direction of analyzing BYOL: gradient.\\n\\n**Reply.** Firstly, the reviewer wrongly summarizes the BYOL\\u2019 upper bound loss. In fact, the BYOL\\u2019 loss is composed of two objectives: (i) attracting the representations of the two views of the same data after the predictor and (ii) repealing the online and the MT under the same data distribution. \\n\\nSecondly, it\\u2019s hard to understand the point of the reviewer\\u2019s claim that \\u201cBYOL doesn\\u2019t minimize a loss\\u201d. What\\u2019s more, the reviewer\\u2019s theory \\u201csince the target network is non-stationary, BYOL doesn\\u2019t minimize a loss\\u201d is just stating a conditional phenomenon observed by the researcher, but is a completely wrong causal inference: if we remove the predictor from BYOL, the target network remains \\u201cnon-stationary\\u201d, but BYOL indeed minimizes a loss: the loss of BYOL quickly goes to zero and follows the representation collapse (refer to our result in appendix). Our way of analyzing BYOL starts from observing the two quantifiable metrics that have been shown crucial to the contrastive-based methods: alignment and uniformity. Though BYOL does not explicitly optimize them, we find that these two metrics are indeed optimized during training by estimating them. The whole point of analyzing BYOL lies in how we can relate its training objective to the alignment-uniformity framework, and approximately equating BYOL and BYOL\\u2019 is empirically supported, but not theoretically. We would like to discuss under what condition this approximation holds in the future, while in this paper our main focus is to provide an overall understanding framework for BYOL. \\n\\nThirdly, we would like to point out, negating the claim that BYOL is approximately equivalent to BYOL\\u2019 by the qualitative results presented in F.2 is itself a shortcut: retraining the neural network on the same dataset with different random seed would yield the close performance while probably different qualitative results. The reason why we present the qualitative results in the appendix is that we would like to show the apparent difference between the collapsed methods and the working one. \\n\\nAt last, we agree that analyzing BYOL from the perspective of the gradient would be a good direction, while our approximating and redesigning the loss function would consequently cause effects on the gradient. 
There is no contradiction between these two methods. We would like to investigate the problem with your advice in the near future. \\n\\n### Result (2)\\nSince the representation space is a hypersphere, the concentration and separation of the data samples are mainly influenced by the tangential component of the gradient. This condition is trivial and easy to satisfy. Suppose the unit vector $z$ is the representation produced by the online network and the unit vector $\\\\bar{z}$ is the representation produced by the MT. The original loss is: \\n\\n\\\\begin{align}\\n\\\\mathcal L = \\\\big|\\\\big| z - \\\\bar{z} \\\\big|\\\\big|_2^2\\n\\\\end{align}\\n\\nAfter applying the condition using simple mathematical techniques, the loss is changed to:\\n\\n\\\\begin{align}\\n\\\\mathcal L = \\\\frac{1}{\\\\langle z, \\\\bar{z}\\\\rangle} \\\\big|\\\\big| z\\\\langle z,\\\\bar{z}\\\\rangle - \\\\bar{z} \\\\big|\\\\big|_2^2\\n\\\\end{align}\\n\\nWhere the inner-product $\\\\langle z, \\\\bar{z} \\\\rangle$ is a scalar and doesn\\u2019t generate any gradient. Our additional experiments show that this condition also doesn\\u2019t affect whether the algorithm would collapse. Please refer to our latest version of the paper.\\n\\n### Result (3)\\nOur claim \\u201cthe predictor is essential to the collapse prevention of BYOL\\u201d is based on the observation that when the predictor is removed, the collapse happens. Other factors are also important to the final quality of the representation distribution, while they do not essentially affect whether the algorithm would collapse, which is also supported by the original BYOL paper Table.5b[1].\"}",
"{\"title\": \"Reply to R3 (part 1/3)\", \"comment\": \"Dear R3, we appreciate the patience of you reading our paper and giving such detailed feedback. There is much helpful advice and many constructive ideas in your review. **However, we feel sorry that your judgement of our work may be conditioned on the misunderstanding of the evaluation metric \\u201clinear evaluation protocol\\u201d in the self-supervised learning field (and other reviewers seem not having the similar concern).** We would like to address your concerns in our following reply.\\n\\n## Linear Evaluation Protocol and Random Baseline \\nWe are somewhat concerned about the fact that the reviewer doesn\\u2019t understand the widely used evaluation metric in self-supervised learning, which is reflected by his/her misunderstanding of \\u201crandom baseline\\u201d. In the \\u201cWriting\\u201d part, the reviewer writes, \\n> In section 3, random is ill-defined. In CIFAR10, random should be 10%, I assume that you refer to random projection. Please clarify.\\n\\nHere we would like to clarify why the \\u201crandom baseline\\u201d is not 10%.\\n\\nMost of work in the field of self-supervised learning adopts the evaluation metric called \\u201clinear evaluation protocol\\u201d to estimate the quality of the representation distribution. Normally, after training under the pretext task, we would yield an encoder network (or feature extractor). To evaluate how well this encoder network is, we fix the weights of it, and then add a linear layer on top of the encoder to train a classifier on the labeled dataset. The point of the linear evaluation protocol is to see whether the data of the same class can be mapped to the representation space so that they could be easily identified. Therefore if we randomly initialize a network and evaluate it under the linear evaluation protocol, we would normally yield better classification accuracy than the RANDOM CLASSIFIER (which has 10% of accuracy on CIFAR10), due to the natural pixel-level intra-class similarity. In our paper, we clearly stated the setting at the beginning of Section 3: \\n> The performance of BYOL original model, whose predictor $q_w$ is a two-layer MLP with batch normalization, evaluated on the linear evaluation protocol (Kolesnikov et al., 2019; Kornblith et al., 2019; Chen et al., 2020a; He et al., 2020; Grill et al., 2020) reaches 68.08 \\u00b1 0.84%. When the predictor is removed, the performance degenerates to 20.92 \\u00b1 1.29%, which is even lower than the random baseline\\u2019s 42.74 \\u00b1 0.41%.\\n\\nSome may argue that the misunderstanding of the evaluation metric is caused by our poor writing, while in the section 3 of the original paper of BYOL[1], the author\\u2019s description is of the same style of ours, which can\\u2019t be simpler and clearer:\\n> We apply this procedure by predicting a fixed randomly initialized network achieves 18.8% top-1 accuracy (Table 5a) on the linear evaluation protocol on ImageNet, whereas the randomly initialized network only achieves 1.4% by itself.\\n\\nAccording to the reviewer, since the ImageNet is of 1000-class classification task, then the random baseline should have 0.1% of accuracy instead of 1.4%, while in reality it's not the case. \\n\\nThe basic understanding of the evaluation metric is fundamental to fair evaluation. In our case, misunderstanding the linear evaluation protocol would lead to misunderstanding the concept of the representation collapse, which is more crucial to rating our contributions. 
And this consequential misunderstanding is also reflected in the reviewer No.3\\u2019s response. \\nIn the \\u201cResults\\u201d part, the reviewer writes, \\n> Table D.3 shows that RAFT/BYOL' does not collapse without predictors when $\\\\beta$ is high. Albeit providing low accuracy, a non-collapse is quite surprising. Unfortunately, the authors leave it for future work. \\n\\nIn fact, in our paper, we introduce the alignment-uniformity framework[2] to readers to understand the concept of representation collapse. The representation collapse means that most of the data are mapped to the same meaningless point, which can be reflected by the metric of uniformity. When the predictor is removed, BYOL, BYOL\\u2019 and RAFT are all collapsed since their uniformity loss is much higher than the random baseline: -0.14, -0.10, and -0.006 respectively, while the random baseline even has -0.51 of uniformity. \\n\\nWe believe that the basic understanding of the linear evaluation protocol and the representation collapse is crucial to the objectiveness of the review. And therefore we sincerely hope that the reviewer could re-evaluate our work after reading our reply.\\n\\n[1] Jean-Bastien Grill, Florian Strub, Florent Altche \\u0301, Corentin Tallec, Pierre H Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, et al. Bootstrap your own latent: A new approach to self-supervised learning. arXiv preprint arXiv:2006.07733, 2020.\\n\\n[2] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. arXiv preprint arXiv:2005.10242, 2020.\"}",
"{\"title\": \"Reply to R4 (part 2/2)\", \"comment\": \"## Question 2\\n\\nWe sincerely apologize if our phrasing causes your misunderstanding on our claim that \\\"predictor is a dissatisfactory property of BYOL\\\". When we say \\\"dissatisfactory\\\" we mean that the current explanation on BYOL doesn't consider the predictor seriously. By making this argument, we want to emphasize the importance of the predictor and what we did in this work was to at least show the efficacy of it with respect to the fact that the predictor helps establish the equivalence between BYOL' and RAFT.\\n\\nWe agree that RAFT doesn\\u2019t completely solve the problem of collapse when the predictor is removed. Our claim that RAFT is a more unified objective (refer to the Figure.2(b) in the latest version of our paper) and is thus more \\u201cessential\\u201d is based on the previous work of alignment-uniformity framework[2], which demonstrates that the alignment loss and the uniformity loss are two competing factors regularizing each other. Evaluate the role of RAFT and BYOL\\u2019 from the perspective of algorithm designing: suppose you want to use another term to constrain the alignment loss which incorporates the Mean Teacher, would you choose to fit to it or run away from it? RAFT loss remains to be an effective regularizer when the predictor is removed, while BYOL\\u2019 fails to do so, which implies that RAFT is more favorable. \\n\\nYes, this conclusion still has potential to be improved by enabling the framework to work when the predictor is removed, while demanding our single paper to solve all the problems would be unfair, let alone under the 8-page limitation. Our further analysis on the efficacy of the predictor focuses on the equivalence between BYOL\\u2019 and RAFT, which makes two crucial contributions: \\nThe importance of the predictor lies in at least equating the two opposite training objectives. And we emphasize the efficacy of the predictor needs to be further studied. \\n\\nThere are multiple factors entangled in BYOL. Our contribution provides a novel view to investigate it. Under our framework, the direction of analyzing BYOL is much clearer than before: two separate questions wait to be answered in the future: \\n- Under what condition BYOL is equivalent to BYOL\\u2019? \\n- How does the predictor help optimizing the uniformity of the representation in RAFT?\\n\\nWe hope our explanation on the RAFT loss and our framework of understanding the working mechanism behind BYOL would provide a new direction of leveraging the MT in the future. We would also like to thank R4 for the two valuable comments, which we think would be helpful to our future study. \\n\\n[2] Tongzhou Wang and Phillip Isola. Understanding contrastive representation learning through alignment and uniformity on the hypersphere. arXiv preprint arXiv:2005.10242, 2020.\"}",
"{\"title\": \"Reply to R4 (part 1/2)\", \"comment\": \"Thank you for the valuable advice. It\\u2019s sad that you only noticed the weaknesses of our paper and ignored all the contributions we\\u2019ve made. We would like to address your major concerns through this reply and recapitulate our contribution.\\n\\n## Question 1\\n\\n**Question.** The reviewer thinks we are unaware of the difference between contrasting two samples (\\u201ccross-sample\\u201d) and contrasting two functions (our \\u201ccross-model\\u201d loss). Then the reviewer argues that contrasting two functions is not capable of collapse prevention by giving a corner case where the weight of the matrix is initialized to zero. The point of this \\u201czero-initialized\\u201d example is to emphasize that special care is required when dealing with the cross-model term. \\n\\n**Reply.** Firstly, the example given is baseless and overly critical. As known by every practitioner in deep learning, neural networks require a reasonable initialization scheme, and zero-initialization often completely disables a network. The reviewer\\u2019s reasoning is applicable to attacking any form of losses, even BYOL itself: if there is an intermediate layer whose weight is zero in the BYOL, then the loss would be a zero constant and nothing would be learned during training. \\n\\nWe understand the reviewer\\u2019s concern and we are prudent with our conclusions. There are some achievable conditions required when we claim that maximizing the cross-model term could prevent collapse, and the randomness of the initial representation distribution is one of them. What\\u2019s more, we shall emphasize that maximizing the cross-model loss is not unconditionally equivalent to contrasting two samples in the representation space, and we avoid claiming it in the paper. We argue that only when the EMA is considered in the target network, the cross-model loss is able to contrast samples. In section 4.2, we leverage the conclusion in the Mean Teacher[1], which is also mentioned in R3, to bridge the gap between the sample averaging and model averaging: \\n> There has been a lot of work demonstrating that weight averaging is roughly equal to sample averaging[1], thus if two samples\\u2019 representations are close to each other at the beginning and their initial updating directions are opposite, then RAFT consistently separates them in the representation space.\\n\\n[1] Antti Tarvainen and Harri Valpola. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. In Advances in neural information processing systems, pp. 1195\\u20131204, 2017.\"}",
"{\"title\": \"Reply to R1\", \"comment\": \"Dear R1,\\n\\nThank you for the kind and constructive comments. We are also aware of your concern whether RAFT is comparable to the SOTA methods when applied to the large-scale datasets such as ImageNet. The main point of our paper, however, focuses on why BYOL doesn\\u2019t collapse. Larger datasets evaluate the effectiveness of the algorithm, while using smaller one is sufficient to demonstrate whether the algorithm would collapse or not: if our algorithm performs better than the random baseline and close to the BYOL, then we are confident to claim it\\u2019s a non-collapsing framework. On the other hand, if the algorithm fails (by fail, we mean worse than the random baseline) on CIFAR10, then even if it works on ImageNet, we would still regard it as flawed since it heavily depends on the data distribution. As for evaluating RAFT on ImageNet, we are trying to gather more computational resources to validate our proposed method, although we wouldn\\u2019t view RAFT\\u2019s effectiveness as the all-important contribution of this work.\\n\\nBeing aware of the limitation brought by the dataset, we avoid claiming that RAFT is better than BYOL in terms of its performance on the linear evaluation protocol. The value of our work lies in the attempt of subsuming BYOL into the already-verified alignment-uniformity framework, and jumping out of the current understanding frameworks originally provided by BYOL under mild conditions, including Teacher-Student framework, DQN\\u2019s Online-Target framework and Mean Teacher in semi-supervised learning.\"}",
"{\"title\": \"Official Blind Review\", \"review\": \"*Summary*\\nThe paper provides a new perspective on the BYOL self-supervised learning method. First, the paper introduces an upper-bound objective, BYOL', that is easier to analyze than BYOL because it is composed of two well understood losses: an alignment loss and cross-model loss. Further, it shows empirically that optimizing BYOL' is similar to optimizing BYOL. Second, the paper introduces the RAFT method which maximizes the alignment loss instead of minimizing it. The paper proves that under some assumptions, such as a linear predictor function, optimizing BYOL' is equivalent to RAFT. Based on this analysis, the paper explains why the predictor function is essential for BYOL and why it is hard to achieve convergence.\\n\\n*Quality*\\nI really like the analysis of the paper. The paper provides a mix of theoretical and empirical argument for understanding BYOL, and introduces a new method called RAFT. The main drawback of the paper is that it limits the empirical analysis to a single and much simpler experimental setup using CIFAR10 and resnet18. I believe that since BYOL's significance is an empirical one and is mainly established on Imagenet, any empirical analysis of BYOL in other simpler settings is quite limited.\\n\\n*Clarity*\\nThe authors have done a very good job in writing this paper. The logic, presentation and results are quite clear to understand.\\n\\n*Originality*\\nI find the paper quite interesting and original in its analysis. I especially like the analysing BYOL through the BYOL' upper-bound.\\n\\n*Significance*\\nI think the results of the paper could have been quite more significant if applied on other experimental setups. While I understand working with SOTA models can be computationally expensive, the main argument of this line of work is empirical and it is hard to be convincing without more extensive empirical results.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Studying an interesting problem, however the main claim is erroneous\", \"review\": \"**Summary**: the paper aims to explain the success of BYOL, a recently proposed contrastive method that mysteriously avoids the trivial constant solution without requiring negative samples. The paper proposes a new loss named RAFT. Compared to BYOL, RAFT is more general since it subsumes a variation of BYOL as its special case, and contains a cross-model term to be maximized which regularizes the alignment loss and encourages the online encoder to \\\"run away\\\" from the mean teacher.\\n\\nThe paper claims this cross-model term encourages disparity, which could help demystify why BYOL does not collapse to a trivial solution. However, the cross-model term itself cannot prevent outputs from collapsing, as explained below.\\n\\n**Question 1**: my main concern is the effectiveness of the cross-model loss: I disagree that the cross-model loss prevents collapsing representations. I think the authors may be confusing contrasting two samples (\\\"cross-sample\\\") and contrasting two functions (\\\"cross-model\\\"):\\n- The cross-model loss is essentially the L2 distance between two functions, which is the average squared error between two model outputs on the *same sample.*\\n- The common contrastive loss, which contrasts outputs from the same model on *different samples*.\\n\\nFor example, suppose MT is a constant function at the $t_{th}$ iteration (i.e. the function outputs some constant $c$ for all input), then the online encoder could be updated to be another constant as far away from $c$ as possible, i.e. the cross-model loss is maximized, however we still have the sample collapsing issue. As a side note, a constant function also achieves a perfect alignment loss.\\nMore concretely, consider $f(x) = Wx +b$ where $W$ is initialized to be the all-0 matrix, i.e. $f(x) = b$ is a constant function. Then for all future updates, learning $f$ only updates $b$ but not $W$ (since there's no gradient on $W$), and therefore $f$ will remain a constant function, i.e. it always collapses the points. One may argue that it is wrong to choose $W = 0$, but the point is, the success of BYOL needs more careful analysis of the optimization process, which cannot be addressed by the cross-model loss term itself.\\n\\n**Question 2**: section 3 phrases the need of a predictor as a disadvantage of BYOL, however RAFT also requires a predictor head to achieve good classification performance. Studying the effect of the predictor is an interesting direction and will make the paper much stronger, as the authors also point out in the conclusion.\\n\\n**Other comments**:\\n- Table 3: why are there no results for BYOL'-MLP? Comparing RAFT-NP to BYOL'-NP, there doesn't seem to be a clear edge of RAFT, both in terms of the uniformity loss and the accuracy.\\nIt would also be better to highlight the key results in the table. The current table is quite dense; adding more highlights and comments will have the reader understand what to take away from these results.\\n- Paper organization: a lot of material is deferred to the appendix, which makes the paper a bit hard to follow since the reader needs to jump back and forth. 
It would be better if results in the appendix are better summarized in the main text.\\n- The term BYOL' first appears at the end of the first paragraph on page 2 without a definition.\\n- Minor typo: there's an extra left parentheses in front of the second $f$ in equation (5); an extra comma after \\\"distribution\\\" in the first paragraph of section 3.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
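The distinction this review draws between the two loss types is easy to make concrete. Below is a minimal, hypothetical sketch (the function names are ours) contrasting them: maximizing the cross-model term can be satisfied by two distinct constant functions, so it cannot by itself rule out collapsed representations, whereas the cross-sample repulsion term directly penalizes a collapsed batch:

```python
import torch

def cross_model_loss(f_online, f_target, x):
    # L2 distance between two models' outputs on the SAME samples;
    # two different constant functions can make this arbitrarily large
    return (f_online(x) - f_target(x)).pow(2).sum(dim=1).mean()

def cross_sample_repulsion(z, t=2):
    # contrastive-style term over outputs of the SAME model on
    # DIFFERENT samples (z: (N, D)); a collapsed batch (all rows
    # equal) gives the worst possible (highest) value of 0
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()
```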
"{\"title\": \"RAFT\", \"review\": \"This paper analyses the recently proposed Bootstrap Your Own Latent (BYOL) algorithm for self-supervised learning and image representation.\\nThe authors first derive an alternative training procedure called BYOL' by computing an upper bound of the BYOL objective function.\\nAfter diverse analyses, the authors then introduce Run Away From Your Teacher (RAFT), where RAFT is another BYOL variant that resembles contrastive method by having an attractive and repealing term in the training objective. According to the authors, this decomposition allows for a better understanding of the training dynamics.\\n\\nFinally, the authors made the following transitivity reasoning:\\n - BYOL and BYOL' are almost equivalent\\n - RAFT and BYOL' are shown to be equivalent under some assumptions.\\nThus, conclusions that are drawn from analyzing RAFT should still hold while analyzing BYOL. They thus link the interest of BYOL's predictor and the EMA through the RAFT loss decomposition. \\n\\nI have multiple strong concerns regarding this paper. These concerns are both on the paper results, shortcuts in the analysis, and the writing style.\", \"results\": [\"--------------\", \"In section 4, the authors introduce BYOL' as a variant of BYOL. To do so, they derive an upper bound on the BYOL loss, i.e. the L2 distance between the projection and the projector, and they try to minimize it. However, this approach disregards that BYOL does not minimize a loss (due to the stop gradient). In other words, the BYOL objective keeps evolving during training; the target distribution is non-stationary. As mentioned in the BYOL paper: \\\"Similar to GANs, where there is no loss that is jointly minimized w.r.t. both the discriminator and generator parameters; there is therefore no a priori reason why BYOL\\u2019s parameters would\", \"converge to a minimum of L_BYOL given the online and target parameters\\\". Minimizing an upper-bound is at best insufficient, at worst a non-sense. The sentences, \\\"minimizing L_{BYOL'} would yield similar performance as minimizing L_{BYOL}\\\" and \\\"we conclude that optimizing L_{BYOL'} is almost equivalent to L_{BYOL}\\\" are unfortunately wrong. This is somewhat highlighted different qualitative results in Appendix F.1.b != F.1.d.\", \"A better approach would be to ensure that the *gradients* go in a similar direction (so the training dynamics are similar rather than the objective function). However, even such a demonstration could be insufficient due to compounding factors in the training dynamics.\", \"The 1-1 mapping between BYOL' and RAFT rely on three hypotheses. While (i) and (ii) are reasonable, hypothesis (iii) is quite strong, and more importantly, neither elaborated nor discussed. In other words, I am unable to validate/invalidate the interest of the theoretical results. Would it be possible to measure the normal gradient empirically? To bound it?\", \"In section 3, i would recommend the author to mention that multiple components were also in the BYOL paper; especially when writing \\\"therefore, we conclude the predictor is essential to the collapse prevention of BYOL.\\\"\", \"Although I acknowledge that self-supervised learning requires heavy computational requirement, and few teams may run experiments on ImageNet. Yet, I would recommend the authors to not use CIFAR10 as the dataset has multiple known issues (few classes, small images, few discriminative features). 
Other variants such at STL or ImageNete can be trained on a single GPU over a day, and are less prone to misinterpretation in the results. Besides, I want to point out that BYOL was not correctly tuned: the experiments are based on a different optimizer (Adam vs LARS) and no cosine decay were used for the EMA, while these two components seem to be critical, as mentioned in BYOL and arxiv:2010.1024.\", \"Overall, I have a serious concern about the paper's core contributions. However, there are still some good elements in the paper that I think are under-exploited:\", \"RAFT is itself an original, new and interesting algorithm. The potential link to BYOL is indeed an interesting lead, but in its current state, I would make it a discussion more than a key contribution.\", \"Table D.3 shows that RAFT/BYOL' does not collapse without predictors when \\\\beta is high. Albeit providing low accuracy, a non-collapse is quite surprising. Unfortunately, the authors leave it for future work\"], \"shortcuts\": \"--------------\", \"i_was_surprised_by_multiple_shortcuts_in_the_reasoning_process_or_undiscussed_conclusions\": [\"The authors mention that the predictor is a dissatisfactory property of BYOL. Could they elaborate? This is actual the key component of the method (if not the only one!), and such pro/cons could be detailed in light of other methods.\", \"In section 4.1, the authors mention that: similar accuracies and losses are sufficient somewhat confirm that BYOL and BYOL' are similar. Two completely different methods may have the same errors while being radically different...\", \"In Section 4.2, the authors mention that \\\"Based on the form of BYOL, we conclude that MT is used to regularize the alignment loss\\\". However, there is no experiments to try to contradict/validate this claim. Differently, the EMA may ease the optimization process or it may have different properties. Even if I understand the logic behind this statement, I regret that the authors do not try to confort it.\", \"In section 4.2, the authors mention that there exist multiple works (while only citing one...) demonstrating that EMA is \\\"roughly\\\" equivalent to sample averaging and may encourage diversity. While this is sometimes true in specific settings (cf. markov game and fictitious play), this is also known to ease optimization (cf. target network in DQN). Stating that RAFT is better than BYOL because it better leverage the EMA target is tricky without proper analysis.\", \"Albeit understandable, the transitivity between BYOL and RAFT is difficult to defend due to multiple approximations and hypothesis. Therefore, it is of paramount importance that the approximations and hypothesis are validated, which is not sufficiently done in the paper.\"], \"writing\": [\"--------------\", \"Although papers' writing quality remain subjective, I tend to expect a formal language. I kind of feel ill-at-ease when reading sentences including \\\"BYOL works like a charm\\\", \\\"disclosing the mistery\\\", \\\"to go harsher\\\", \\\"bizarre phenomon\\\". Other sentences also expresses judgement such as \\\"inconsistent behavior\\\", \\\"dissatisfactory property of BYOL\\\" or \\\"has admirable property\\\" without proper argumentation.\", \"It is non-trivial to follow the different version of the algorithms... which are defined in the appendix. Please consider renaming BYOL'.\", \"A related work section would have been useful to put in perspective BYOL that are theoretically motivated e.g. 
AMDIM, InfoMin, other self-supervised learning methods without negative example, e.g. DeepCluster, SwAV. Section 2 is more about the background, not related work.\", \"there are a few confusions in the notation, \\\\alpha \\\\beta have different meaning across equations (Eq 7 vs 8)\", \"In section 3, random is ill-defined. In Cifar10, random should be 10%, I assume that you refer to random projection. Please clarify.\", \"Figure 1 is clear, and I recommend to keep it as it is.\", \"From my perspective, the mathematical explanation in Section 5 is quite obfuscated, and I would recommend a full rewriting.\", \"Please avoid unnecessary taxonomy, e.g. uniformity optimizer, effective regulizers and others.\", \"In conclusion, you mentioned some results about the projector. However, you never detail them in the paper. Please, do not discuss unpublished results.\", \"Overall, I had difficulties following the paper: I keep alternating between the appendix, previous sections, and the text. Again, the phrasing makes me ill-at-ease.\"], \"conclusion\": \"--------------------\\nI have some serious concerns about the core results of the paper. Importantly, Theorem 4.1 follows a misinterpretation of the BYOL training dynamics. From my perspective, there are too many unjustified claims, and I cannot recommend paper acceptance. However, there is some good idea in the paper, and I strongly encourage the authors to study RAFT independently of BYOL in the future.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
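The reviewer's suggestion of comparing *gradients* rather than loss values can be operationalized cheaply. A rough sketch (our naming; it assumes both scalar losses are computed on the same graph over the same parameters):

```python
import torch

def grad_cosine(model, loss_a, loss_b):
    # cosine similarity between the parameter gradients of two scalar
    # losses; values near 1 suggest similar training dynamics
    params = [p for p in model.parameters() if p.requires_grad]
    ga = torch.autograd.grad(loss_a, params, retain_graph=True, allow_unused=True)
    gb = torch.autograd.grad(loss_b, params, retain_graph=True, allow_unused=True)
    va = torch.cat([(g if g is not None else torch.zeros_like(p)).flatten()
                    for g, p in zip(ga, params)])
    vb = torch.cat([(g if g is not None else torch.zeros_like(p)).flatten()
                    for g, p in zip(gb, params)])
    return torch.dot(va, vb) / (va.norm() * vb.norm() + 1e-12)
```

As the review itself notes, even well-aligned gradients would not fully settle the question, since compounding factors in the training dynamics remain.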
"{\"title\": \"Review2\", \"review\": \"This paper mainly proposes an objective that incorporates the target network in BYOL in a opposite way that encourages the prediction of the online network to be far away from the target network.\", \"concerns\": \"1. It is overly claimed that \\\"we unravel the puzzle of how BYOL avoids representation collapse\\\". For example, the authors fail to capture some intrinsic properties of BYOL, such as the role of prediction head (MLP + BN). The theorem 5.1 can only deal with the prediction head that is linear. The authors could refer to the [1] for more insights.\\n\\n2. The experimental results do not support the claim that \\\"RAFT is a conceptually non-collapsing algorithm\\\". In table 1, for results equipped with q_{w}-MLP, RAPT with better acc (71.31) in fact does not have smaller uniform loss. Instead, its alignment loss is smaller (which brings the acc improvement). So, the RAFT does not actually always enlarge the uniformity as claimed.\\n \\n3. The implementation of BYOL in this paper regarding Cifar10 is not convincing. [2] also use the resnet18 as the encoder and it achieves the accuracy with 91+ in Cifar10. However, the reproduced result in this paper is only around 70. \\n\\n4. BYOL proves its own effectiveness in ImageNet. To make fair comparisons, the authors shall conduct experiments in the same dataset. Otherwise, the claim regarding to the BYOL might not be solid.\\n\\n5. It is acceptable that running away from the mean teacher increases the difficulty of alignment. But it is unclear to the reviewer why it can produce global uniformity in the representation space?\\n\\n[1] Tian, Yuandong, et al. \\\"Understanding Self-supervised Learning with Dual Deep Networks.\\\" arXiv preprint arXiv:2010.00578 (2020).\\n\\n[2] Ermolov, Aleksandr, et al. \\\"Whitening for Self-Supervised Representation Learning.\\\" arXiv preprint arXiv:2007.06346 (2020).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
YMsbeG6FqBU | The Advantage Regret-Matching Actor-Critic | [
"Audrunas Gruslys",
"Marc Lanctot",
"Remi Munos",
"Finbarr Timbers",
"Martin Schmid",
"Julien Perolet",
"Dustin Morrill",
"Vinicius Zambaldi",
"Jean-Baptiste Lespiau",
"John Schultz",
"Mohammad Gheshlaghi Azar",
"Michael Bowling",
"Karl Tuyls"
] | Regret minimization has played a key role in online learning, equilibrium computation in games, and reinforcement learning (RL). In this paper, we describe a general model-free RL method for no-regret learning based on repeated reconsideration of past behavior: Advantage Regret-Matching Actor-Critic (ARMAC). Rather than saving past state-action data, ARMAC saves a buffer of past policies, replaying through them to reconstruct hindsight assessments of past behavior. These retrospective value estimates are used to predict conditional advantages which, combined with regret matching, produces a new policy. In particular, ARMAC learns from sampled trajectories in a centralized training setting, without requiring the application of importance sampling commonly used in Monte Carlo counterfactual regret (CFR) minimization; hence, it does not suffer from excessive variance in large environments. In the single-agent setting, ARMAC shows an interesting form of exploration by keeping past policies intact. In the multiagent setting, ARMAC in self-play approaches Nash equilibria on some partially-observable zero-sum benchmarks. We provide exploitability estimates in the significantly larger game of betting-abstracted no-limit Texas Hold'em. | [
"Nash Equilibrium",
"Games",
"CFR"
] | Reject | https://openreview.net/pdf?id=YMsbeG6FqBU | https://openreview.net/forum?id=YMsbeG6FqBU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Y5rTdf-LmtW",
"ji0Rswb2-a",
"rBnNU8mplr",
"34pQORTC-LD",
"cH_NUqTVNkv",
"hEW6ukPgosW",
"uO9OrSPvnrH",
"CNcFSEXA1dI"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610973849005,
1610040390585,
1606251570145,
1606243953388,
1606243453727,
1604284250153,
1603580308692,
1603060148609
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3679/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3679/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3679/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3679/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3679/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3679/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3679/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"DREAM is a sound implementation of CFR\", \"comment\": \"We spoke to the original authors and realized that there was a misunderstanding on our part. We retract our claim of DREAM not being sound. Apologies to the authors and reviewers.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper introduces a new algorithm to solve game, more or less similar (in the general idea, yet differences are interesting) than CFR. The concept is to sample from past policies to generate trajectories and update sequentially (via regret matching).\\n\\nThe three reviewers gave rather lukewarm reviews, with possible suggestions of improvements (that were more or less declined by the authors for those proposed by Rev3 and Rev4; the added material focuses more on the clarity of the text than on the content itself).\\n\\nI have also read the paper, and find it quite difficult to assess. At the end, it is not clear to a reader whether ARMAC is the new state of the art, or just a \\\"variant\\\" of CFR that will be soon forgotten. The performances do not seem astonishing (at least against NSFP) and even though DREAM might not be satisfactory to the authors (EDIT POST DISCUSSION: actually, DREAM is a valid competitor and must be included in the comparative study), it would have been nice to provide some comparison. Maybe the issue is the writing of the paper that could and should be improved so that it is clearer what are the different building blocks of ARMAC (and their respective importance).\\n\\nIf ARMAC is the new state of the art, then I am sure the authors will be able to clearly illustrate it in a forthcoming revision (maybe with more experiments, as suggested by Rev2). Unfortunately, for the moment, I do not think this paper is mature enough for ICLR.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thank you for the thorough review. We have applied significant modifications based on your review, which we describe below.\", \"textual_modifications\": [\"New or modified text since the previous copy is highlighted in blue. This will be changed to black in the final copy.\", \"We have added a diagram of a simple game to the Background and an example playing of that game to describe all the terms\", \"We have added a step-by-step walkthrough example of ARMAC in Appendix using this simple game.\", \"We moved the entire Theoretical Properties section to the appendix.\", \"We have added text to the intro to clarify CFR and a new paragraph clearly stating the problem statement.\"], \"point_1\": \"We have now added a sentence at the end of the first paragraph of the intro to explain the goal of CFR (but yes, also: each player minimizes their cumulative counterfactual regret; in self-play, this leads to approximate equilibria on average). \\n\\nWe have added a problem statement paragraph at the end of the intro which is self-contained, clear problem the paper addresses.\", \"point_2\": \"The benchmark games used in the first part of Section 4 are much smaller than Atari games but non-trivial since they require playing stochastic policies at equilibrium, standard RL algorithms for MDPs are not applicable. These are games that have commonly been used across this literature and are small enough that we can compute the exact distance to Nash equilibrium (this is the metric NashConv). Prisoner\\u2019s dilemma is a non-zero sum one-shot game. In this paper, we examine zero-sum extensive games, which are sequential (extended in time), hence the motivation of using RL.\\n\\nWe have added a step-by-step example walk-through of the algorithm using Kuhn poker, a very small extensive-form game that can be explained via a diagram. We do not include results on Kuhn poker in our results because it is too small and any principled RL algorithm solves it (finds a Nash equilibrium with high precision) almost immediately. We placed the worked out example in the appendix to verbosity, but we may as well move it into the main text upon request.\", \"point_3\": \"We have added Subsection 2.1 that defines Nash equilibria and the empirical metrics used to compute the convergence rates of algorithms in practice (NashConv, exploitability).\", \"point_4\": \"ARMAC accomplishes two things as far as variance reductions is concerned. Firstly, it defined a new quantity W that has a similar scale for all information states. This is unlike what vanilla counterfactual regrets are like, that become negligible small for states far away from the root node of the game tree due to low reach probabilities, that are equal to the product of all action probabilities from the start of the game up to that state. While this makes no difference in a tabular setting, this is very problematic for neural networks that are not good at expressing quantities of very different scale. DeepCFR, for instance, also solves this problem, but by renormalizing counterfactual regrets by a different constant.\\n\\nIn order to measure empirical variance, at first we have to define what we want to measure. What matters most, is the variance of gradients that a given optimizer has to deal with, and gradient values are proportional to L1 losses when L2 loss is minimized (this is because a 2x is a derivative of x^2). 
Also, while absolute scales of losses or gradients do not matter (as they are absorbed by the optimizer), only variability of their magnitudes matters, as the learning rate has to be able to handle the worst possible jumps. Thus, we defined the following quantity to measure: signal to noise ratio. Say we sample the batch of size B and we train the mean regret estimator for each valid action within that information state. We have A actions in total. For each batch entry and each action, we evaluate an appropriate L1 loss (gradient magnitude estimate) of the regret head. Then we calculate the following statistics across all batch entries, all actions and many training iterations: mean(L1) and mean(L1^2). We define var(L1) = mean(L1^2) - mean(L1)^2. We define sdev(L1) = sqrt(var(L1)). And finally, we define signal to noise ratio as S = mean(L1) / sdev(L1). The lower this ratio is, the higher the effective sample size within a given batch is, and the more efficient learning process will be.\\n\\nWe evaluated this quantity for on Leduc Poker and Liars Dice . ARMAC got 0.40 on Leduc Poker and 0.41 on liars dice.\\n\\nWe then computed the same metric for the regret network in MC-RCFR, obtaining 0.35 for Leduc and 0.18 for Liar\\u2019s Dice. Note that the drop compared to ARMAC is considerably larger on Liar\\u2019s dice, which is a larger game (~24 times larger than Leduc), with more variable length episodes (3-14 vs. 7-12).\"}",
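For concreteness, the statistic described in Point 4 reduces to a few lines; here is a minimal sketch (our naming), assuming the per-entry, per-action L1 losses have already been pooled into one flat array across batches, actions and training iterations:

```python
import numpy as np

def signal_to_noise(l1):
    # l1: flat array of L1 regret-head losses (gradient-magnitude proxies)
    m = l1.mean()
    var = (l1 ** 2).mean() - m ** 2     # var(L1) = mean(L1^2) - mean(L1)^2
    return m / np.sqrt(var)             # S = mean(L1) / sdev(L1)
```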
"{\"title\": \"Our response to AnonReviewer4\", \"comment\": \"Thank you very much for your helpful review.\\n\\nAddressing point 1.\\nFrom the perspective of critic training, the problem is a pure RL and sample complexity will ultimately depend on the algorithm used. In this paper we used TB(lambda) (tree-Backup by Precup et al) algorithm, but any other off-policy policy evaluation algorithm can be used instead. We do not think that our choice of the algorithm was optimal but we chose it only for algorithmic simplicity. \\n\\nAddressing point 2.\\nCFR can also solve single player settings by substituting other players (player -i) reach probability with 1. A single player game is just a special case of a two player game where the second player does not make any move. CFR can certainly solve such games, as long as they have a finite state space. For the same reason ARMAC is also capable of solving single agent problems and we tested on a few ATARI domains as a sanity check.\\n\\nThe main contribution behind ARMAC is the realization that by storing policy pools (as opposed to a replay of experiences) and training a critic network one can accurately evaluate all necessary reach probabilities that are necessary to derive a neural version of CFR without using any importance weights.\\n\\nAlso, ARMAC is compatible with many exploration methods that can be applied in a single agent case as well. In depth exploration of different exploration strategies was not the main focus of the paper. By exploring exploratory strategies too deeply we may lose the main focus. The key thing we wanted to show in that section is that a bandit chooses to use mean regret policies freely over recent regret policies.\"}",
"{\"title\": \"Out response to AnonReviewer3\", \"comment\": \"Thank you very much for your helpful review.\", \"reference_to_dream_paper\": \"https://arxiv.org/pdf/2006.10410.pdf (DREAM: Deep Regret minimization with Advantage\\nbaselines and Model-free learning)\", \"we_have_not_compared_our_work_with_dream_experimentally_as_dream_is_not_an_asymptotically_consistent_implementation_of_cfr\": \"it does not preserve the convergence guarantees of MCCFR. DREAM would give biased results even in tabular settings. The key problem in their paper is the absence of all necessary adaptations when transitioning from DeepCFR (that uses external sampling), to DREAM (that uses outcome sampling). When an algorithm is executing external sampling (i.e. expanding all branches in player i decision nodes) and putting observations into reservoir memory, it is in expectation equivalent to sampling actions uniformly at player i\\u2019s decision point. This key here is that sampling policy for player i actions does not change across training epochs. However, when outcome sampling is used, sampling policy of the player i changes with training epochs and this requires much extra work to compensate for.\\n\\nDREAM algorithm does not store past policies neither for player i nor for player -i. Instead, the algorithm stores experiences in reservoir memory B (described in Section 4.5 for the case of DeepCFR and mentioned in section 5.2 in the case of DREAM implementation) where both sampling player policies are correlated. And, unlike in the case of DeepCFR using external sampling, empirical frequencies using DREAMS outcome sampling of a given information state visitations will no longer be independent of players i\\u2019s policy run at time t and will be proportional to the total reach of both players at that epoch. Section 5.2 of DREAM\\u2019s paper contains the following statement:\\n\\n\\u201cFurthermore, because from agent i\\u2019s perspective histories are sampled based on their information, the expectation for data added to B_d i in infostate s_i is proportional to r^t_i (s_i , a_i). \\u201c\\n\\nThe statement is correct only when a single epoch\\u2019s values are averaged over. However, once average regrets are calculated across more than one epoch by using data within the reservoir replay, each epoch's contribution towards average regrets of a given information state will be weighted by player\\u2019s i policy pi^t_{i}(s, a) as well as -i. \\n\\nVery similar problem applies when an average policy is calculated in section 5.3 . CFR calculates an average policy weighted by player\\u2019s i reach probability only. Such averaging only works when the policy of an opponent is not changing in time. Such calculation was not problematic for DeepCFR, as external sampling emulates a time independent uniform sampling policy. However, when outcome sampling is used, both policies within the replay memory get correlated. Technically, DREAM paper averages policy weighting it by the total reach and not only player\\u2019s i reach and thus is incompatible with CFR. This can be fixed by using appropriate importance weights, but this would lead to a lot of variance and has not been done in DREAM\\u2019s paper.\\n\\nIn our work we spent a lot of effort making sure that all those calculations are done correctly and that our algorithm would produce correct reach weightings for every quality we evaluate. 
The necessity of making both player policies independent in each training epoch, and thus allowing appropriate quantities to be eliminated was one of the reasons we chose to store all past policies and repeatedly resample them from game play. This can not be trivially done by only using a large (reservoir) replay memory and is one of the main contributions of ARMAC.\\n\\nFinally, note that we have run ARMAC on FCPA no-limit Hold\\u2019em, showing for the first time local best-response (LBR) bounds on exploitability, whereas DREAM showed results on a hand-constructed poker subgame. No other neural RL algorithm has shown decreasing exploitability of any form over training time in this game. Only DeepStack has reported similar metrics, and it used search and even more domain knowledge than DREAM + Deep CFR. We consider our poker result to be a significant achievement, and a testament to the scalability that we designed ARMAC around.\"}",
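To make the averaging point above concrete, here is a hypothetical tabular sketch of the CFR average strategy at one information state, which weights each epoch's policy by player i's own reach only; weighting by the joint reach of both players, as discussed in the reply, yields a different average once the opponent's policy changes over time:

```python
import numpy as np

def cfr_average_policy(reach_i, policies):
    # reach_i[t]: player i's own reach probability of this infostate at epoch t
    # policies[t]: player i's action distribution at this infostate at epoch t
    w = np.asarray(reach_i, dtype=float)     # shape (T,)
    p = np.asarray(policies, dtype=float)    # shape (T, A)
    return (w[:, None] * p).sum(axis=0) / w.sum()
```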
"{\"title\": \"ARMAC is model-free algorithm and improved from neural based CFR.\", \"review\": \"Review:\\nThis paper proposes a general model-free RL method for no-regret learning based on a repeated reconsideration of past behavior. The ARMAC algorithm using the off-policy policy evaluation algorithm TreeBackup to estimate value function and use regret matching to get the next joint policy.\\n\\nThis paper idea is origin from DCFR, DNCFR, single CFR. But those are model-based algorithms. ARMAC is model-free algorithms and it can be used in a more border environment. If the author compares this paper to the DREAM algorithm(Deep Regret minimization with Advantage baselines and Model-free learning) to state ARMAC advantage, I will update my score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good combination with gaming theory and actor-critic, but still having several concerns\", \"review\": \"In this paper, the authors adopt the idea from gaming theory to reinforcement learning and propose a new algorithm that uses the previous policy to update the current training without using importance sampling. Experiments show that the proposed algorithm cannot only work on the single-player setting but also work on the multi-agent (zero-sum) problems. However, I have the following concerns about the algorithms:\\n1) To train the critic, how would the sample complexity be? Like vanilla Actor-Critic algorithms, can it be replaced by doing a one-step TD(0) update on the critic to improve the sample efficiency?\\n2) Since the original CFR method is solving the multi-agent zero-sum algorithm, it would be interesting why this extension could solve the single-player problem?\\n\\nAlso, it would make the contribution of this work more clear if the author can compare this exploration method with other exploration methods, such as $\\\\epsilon$-greedy or UCB. To me, the algorithm uses sample trajectory $\\\\rho \\\\sim (\\\\mu_i, \\\\pi_{-i}^j)$. If we make all $\\\\mu$ be random policy, is this exploration similar to $\\\\epsilon$-greedy to some extent?\\n\\nConsidering all contributions and concerns mentioned above, I will suggest a borderline accept for this paper. I might change my score after the author's response and discussion.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Requires improved presentation and bit more experimentation\", \"review\": \"This paper considers the problem of counterfactual regret minimization and proposes an algorithm that does not use the importance sampling procedure. The claim is that this helps in reducing the variance usually introduced by the IS procedure. They propose a new algorithm that uses the previously used policies as a buffer and replays those policies to learn a new policy. The algorithm is also claimed to be highly scalable for games with large state-action pairs.\\n\\n\\nMy overall assessment of this paper is that, the problem considered is an important one and a well-implemented/well-written paper would definitely be a good paper that is significantly impactful for this community. However, my personal opinion is that this paper is not yet there. First, the paper is not well-written for someone who is not familiar with the exact prior literature on CFR using deep learning. In particular, the paper is not organized well, some of the important definitions are omitted and in general the key points aren't sufficiently highlighted. Also I find the experiments to be a bit lacking. In particular, one of the key claims of the paper is that it eliminates the variance introduced by the IS procedure, yet there are no experiments to substantiate this (more below). Also the executed experiments are not explained well. My opinion is that the section of \\\"theoretical properties\\\" is totally unhelpful for the main section and can be relegated to the appendix; it does not add value to the understanding of this algorithm. Additionally, I have several comments below that can help with both the writing, the weaker experiments and overall improving this paper to a state that will make it more impactful/easier to understand. My current assessment of this paper is that without a major revision of the writing and execution, this paper is not in a state to be accepted. \\n\\n\\n(1) A self-sufficient description of the problem statement. The paper does describe the problem, but it takes a few readings for a new reader not familiar with the exact line of work to get the setup and the contributions of this paper. Moreover, as an arrangement it is spread over introduction and notations. Finally, the paper doesn't explicitly state what the goal of a CFR algorithm is. Is it to minimize the total average counterfactual regret?\\n\\n(2) Along those lines, I think the paper would be very informative if the algorithm was evaluated and instantiated on a very simple two-player game with known Nash equilibria (even simpler than Atari and Montezuma's revenge). In particular, there are many such games known in theory (e.g., the prisoner's dillema) and you can pick one of them and instantiate/evaluate this algorithm to help the reader understand the notations precisely. My opinion is that the notations is intertwined with informal descriptions and it is often hard to parse what the authors are meaning to say. Having such a unified simple game will make this process easier on the reader. This also ensures that the implementation and the algorithm is itself correct.\\n\\n(3) Some definitions in the empirical section are not defined formally. In particular, the quantity NashConv is not defined in this paper and relegated to a related work. Only the informal description is given. This makes the reading really hard. For instance, we know that it should be close to 0, but how close? For instance, how do I interpret a value of 0.5? 
What is the range of NashConv? These questions could be easily answered if a precise definition of NashConv was included in this paper.\\n\\n(4) Issues with experiments: First, in the convergence plot for ARMAC in Figure 3, it would be good to plot the line for what the NashConv is (like in Figure 2). Second, the main claim of this paper is that the variance introduced by importance sampling is not present (because this algorithm does not invoke the IS procedure). However, it is surprising that the empirical evaluation section does not include a convincing experiment to drive this point. In particular, it would be good to have a comparison of the variance of the three algorithms and show that the variance introduced by the IS procedure is indeed a meaningful problem for the final outcome (i.e., variance in the convergence value to the NashConv) and that the proposed method indeed reduces it? If the variance introduced by the IS procedure does not lead to a meaningful variance on NashConv, please motivate better why the variance introduced by IS is a problem?\\n\\n(5) Some suggestions on plots: I find Figure 1 to have particularly difficult to read color schemes. First, the caption states a Pink line and it took me a while to figure which was Pink (I think it is the violet line which is pink). The same problem with Brown. Overall, the 7 colors used are pretty close and I would suggest either trying to use other markers to distinguish (such as a text on the line, different line type etc) or try and use contrasting color schemes. For instance, some of these lines may be indistinguishable for a person with color blindness. Likewise, the titles of the plots are not meaningful/interpretable. In figure 1, both the x and y axis seem like titles that were used internally by the authors. I do not understand what they mean. In the x-axis title, I don't see what that 1 represents and I quite frankly do not understand what the y-axis is trying to say. Likewise, among the three plots, across Figures 1, 2, 3 if the order of the three environments remained consistent, that would help the readability. This is also important, because I still do not understand how to interpret Figure 1. I think 0 implies that the algorithm is good. But if its negative does it mean that the algorithm is better while if its positive it is worse? Please explain Figure 1 better.\\n\\n(6) Finally there are many typos throughout the paper. Here is a non-exhaustive list.\\n- page 3: heads on the same neural -> I don't understand what this statement is saying and seems grammatically incorrect as stated.\\n- Page 3: The first one estimate -> The first one estimates\\n- In caption of Figure 1, the word modulations is used. This word is not defined anywhere else in the paper. Please either define this or re-use a word that has been introduced.\\n- Page 6: ARMAC generates experiences using those... -> this sentence is grammatically incorrect. Please fix.\\n- Page 8: Conclusions. \\\"It is brings back\\\" -> grammar check this sentence\\n- Page 8: Conclusions: \\\"for convergence one of the classes\\\" -> grammar check\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
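For completeness, the definition the review asks for, as it is commonly used in this literature (e.g., by Lanctot et al.; stated here for reference): for a joint policy $\pi$,

```latex
\mathrm{NashConv}(\pi) \;=\; \sum_{i} \Big( \max_{\pi_i'} u_i(\pi_i', \pi_{-i}) \;-\; u_i(\pi) \Big),
```

i.e., the sum over players of how much each could gain by deviating to a best response. It is non-negative and equals 0 exactly at a Nash equilibrium; in two-player zero-sum games, exploitability is NashConv divided by 2.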
]
} |
pbUcKxmiM54 | Simple deductive reasoning tests and numerical data sets for exposing limitation of today's deep neural networks | [
"Kalidas Yeturu",
"Manish Kumar Srivastava"
] | Learning for Deductive Reasoning is an open problem in the machine learning world today.
Deductive reasoning involves storing facts in memory and the generation of new facts over time.
The concept of memory, processor and code in deduction systems is fundamentally different from the purpose and formulation of weights in a deep neural network.
A majority of the machine learning models are inductive reasoning models including state of the art deep neural networks which are effectively tensor interpolation based models.
A step towards the realization of memory is through recurrent neural networks and their variants; however, the formal representation is not sufficient to capture a complex mapping function between input and output patterns.
Deep neural networks are positioned to do away with feature engineering, which is essentially a deductive reasoning methodology.
There are existing works on deductive reasoning in neural networks that require learning of syntax, unification and deduction, and that operate on text data as sequences of tokens.
However, the performance of deductive reasoning networks is far from perfect, which may be due to either syntax or deduction aspects.
In this context, we have proposed a suite of completely numeric data sets which do not require parsing as with text data.
The 10 data sets are for - (a) selection (3 data sets) - minimum, maximum and top 2nd element in an array of numbers; (b) matching (3 data sets) - duplicate detection, counting and histogram learning; (c) divisibility tests (2 data sets) - divisibility of two numbers and divisibility by 3; (d) representation (2 data sets) - binary representation and parity.
Though extremely simple in terms of feature engineering, in all of these tests simple deep neural networks, random forests and recurrent neural networks have failed, with very low accuracies.
We propose these as a numerical test-bed for testing learning models for deductive reasoning. | [
"inductive reasoning",
"deductive reasoning",
"neural network",
"memory",
"feature engineering"
] | Reject | https://openreview.net/pdf?id=pbUcKxmiM54 | https://openreview.net/forum?id=pbUcKxmiM54 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"_ZUDqplYWxp",
"Wt7FGgyoXSq",
"h-8KlJ641Bq",
"PC38LFQ_z71",
"RZ1vM3gQnyO",
"rNdfLvwfBmR",
"p-N7Z2BZFgK"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040363973,
1606263260121,
1606263208982,
1603888317224,
1603866313760,
1603814937939,
1602620448283
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3676/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3676/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3676/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3676/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3676/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3676/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper is not suitable for publication at ICLR. The paper contains a useful message, that neural networks are not a silver bullet, and are especially not well suited to deductive problems. However, as several reviewers pointed out, the claims of the paper are undermined by the fact that it ignores a lot of relevant work on using neural networks in the context of logic reasoning. Reviewer 2 provides a particularly useful list of relevant works on the topic.\"}",
"{\"title\": \"We have updated the manuscript keeping in mind what the reviewers suggested.\", \"comment\": \"Reviewer Comment: I don't necessarily see the problem with small feature engineering. Almost every neural approach that is proposed has some engineering - architecture, hyperparameters, data augmentations, etc. that benefit from knowledge about the task or data.\", \"response\": \"Thanks to the reviewer for valuable inputs.\", \"reviewer_comment\": \"Interesting analysis, but lacking details and unclear motivation\"}",
"{\"title\": \"We have updated the manuscript keeping in mind what the reviewers suggested.\", \"comment\": \"Reviewer Comment: The paper is hard to follow at places. The main contributions are the algorithms 1-5.\\n\\u201cThe listings of the algorithms seem quite redundant considering the simple types of datasets one wishes to generate when this would often be achievable using mathematical formula or code \\\"one-liner\\\"\", \"response\": \"Thanks to the reviewer for valuable inputs.\", \"reviewer_comment\": \"Simple datasets that are hard to model using neural networks without feature engineering\"}",
"{\"title\": \"Simple datasets that are hard to model using neural networks without feature engineering\", \"review\": \"This paper studies the limitations of deep neural networks to model deduction based inferences. This is done by crafting simple datasets and experimentally showing that some (details are not provided) RF, NN and RNN models fail on these.\\n\\nThe paper is hard to follow at places. The main contribution seems to be Algorithms 1-5, which can be used to generate 10 different dataset \\\"benchmarks\\\". The listings of the algorithms seem quite redundant considering the simple types of datasets one wishes to generate, when this would often be achievable using mathematical formula or code \\\"one-liner\\\" (the algorithms are also missing information what is returned and the fonts are used inconsistently). The experimental evaluation gives no details of the trained models.\\n\\nI agree with the authors, that feature engineering is very relevant when it comes to using ML models and in recent years there has been some tendency to consider neural networks as simple plug-in solutions to all scenarios. However, it seems hardly surprising that the crafted benchmarks proposed here are difficult or even impossible to learn for random forest or neural networks. I might be missing something crucial here, but the paper's contribution seems not really warrant publication.\", \"pros\": \"Raising awareness that deep learning is not a plug-in solution for every occasion\", \"cons\": \"Significance and novelty seem questionable\", \"questions\": \"Please address and clarify the con above\", \"minor\": \"This structure is repeated in several places and is hard to parse, consider clarifying it:\\n\\\" - (a) selection (3 data sets) - minimum, maximum and top 2nd element in an array of numbers; (b) matching (3 data sets) - duplicate detection, counting and histogram learning; (c) divisibility tests (2 data sets) - divisability of two numbers and divisability by 3; (d) representation (2 data sets) - binary representation and parity.\\\"\\n\\nRegarding the statement \\\"However to the best of our knowledge and exploration, today there is no RNN formulation which is meant to learn facts, unification and deductive inferences.\\\", have the authors checked the recent approaches to use deep learning to learn to solve combinatorial problems (e.g., SAT, CSPs) and using GNNs. This is a currently very active area of research that might be interesting to the authors, see e.g.,\", \"https\": \"//openreview.net/forum?id=HJMC_iA5tm\\nfor some recent examples.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting analysis, but lacking details and unclear motivation\", \"review\": \"*Summary:*\\nThe paper argues that deductive reasoning is an open problem in current machine learning scenarios where features are learned rather than hand-crafted. To highlight the limitations of current approaches, the paper proposes a benchmark suite of 10 simple tasks (finding the minimum, divisibility test, etc.) that are trivial with some feature engineering, but are shown to be very hard without it. Experiments are performed with random forests, neural networks (MLP?), and recurrent neural networks.\\n\\n*Strengths:*\\n1. Important to highlight limitations of current neural network based methods. The proposed tasks are very simple for humans, but are discrete and deductive, rather than the inductive setups NNs typically work with.\\n2. Experiments (although quite limited) show that recent ML approaches exhibit performance close to random.\\n\\n*Weaknesses:*\\n1. While I agree that highlighting the drawbacks of current inductive ML approaches is important and that the proposed tasks are hard to do, I don't necessarily see the problem with small feature engineering. Almost every neural approach that is proposed has some engineering - architecture, hyperparameters, data augmentations, etc. that benefit from knowledge about the task or data. For example, CNNs trained on ImageNet use a lot of knowledge: convolutions better than standard linear layers; random crop of the image during train, 5 crops + flips at test time to further improve spatial understanding; image rotation or intensity variation as data augmentation strategies, etc.\\n2. The paper has a lot of space (is only 6 pages), but does very little to explain the models. No details about the RF, NN, or RNN are mentioned. Is the NN an MLP? How many layers? What are the hidden sizes? How is the RNN used? How is the output produced, last time step hidden state? What is hidden size dimensionality? Details like this matter, and performance metrics without them do not say much.\\n3. Since the paper takes the stand that feature engineering is key, it would be nice to show improved results with little feature mapping. While it seems that most tasks should be solvable, it is nice to prove that nevertheless. For example, what feature engineering strategy should be used for finding the maximum (when feature engineering already solves the task)? Or would it be enough to represent the real number as a binary sequence for the parity problem?\\n\\n*Overall rating:*\\nWhile the premise is interesting, the work needs to be developed further and presented in much more detail than the current state. In addition, I would like to see some discussion on how some of these deductive reasoning tasks are required as part of an overall intelligent system, rather than just a set of tasks specifically built to break NNs.\\n\\n*Post-rebuttal*\\nAll reviewers agree that this paper is not up to the mark. While the revision does include several additional related works, they are not very well integrated with the rest of the discussion on the paper. For example, how would some of these memory networks perform? How would Neural Turing Machine do? Considering this, I am hesitant to improve my rating for the paper, even if the collection of related works will certainly help in the re-submission.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Great work on developing the deductive reasoning test sets but ignored existing state-of-the-art models and efforts\", \"review\": \"This paper's contribution is introducing a set of tasks and datasets that require deductive approaches as opposed to common induction-based models. The paper tackles an important and interesting problem that helps to shape the future of the neuro-symbolic research area. My main concern however is, the paper ignores and does not cover the current state-of-the-art techniques and their corresponding datasets and by just introducing some datasets fail to give a correct image of the current efforts in this area. For example, the variation of Neural Turing Machine and Memory Networks has been successfully applied to the sorting problem (which has been proposed as one of the tasks of interest in deductive reasoning in this paper as well) [1], however, the authors have not discussed these class of networks at all. In fact, the authors mention the gap in the current models by talking about the need for models that can store the facts and the intermediate results for being able to conduct deductive reasoning but do not talk about the role and shortcomings of Memory Networks and Neural Turing based models or Neural\\nStacks/Queues. Similarly, there are no arguments in the paper about why Neural Theorem Provers [2] cannot be used to emulate the deductive inference mechanism. \\nIn summary, the authors have initiated a good step toward defining the simple deductive reasoning tasks; However, the work has not placed well on the body of current neural and neuro-symbolic techniques, tasks, and datasets and therefore the contribution is not enough for the publication in ICLR.\", \"minor_comments\": [\"3rd sentence of the introduction needs rewriting.\", \"Section 2.2: of of ---> of\", \"Results: 2^5 0 ---> 2^50\", \"1) Vinyals, Oriol, Samy Bengio, and Manjunath Kudlur. \\\"Order matters: Sequence to sequence for sets.\\\" arXiv preprint arXiv:1511.06391 (2015).\", \"2) Rockt\\u00e4schel, Tim, and Sebastian Riedel. \\\"Learning knowledge base inference with neural theorem provers.\\\" Proceedings of the 5th Workshop on Automated Knowledge Base Construction. 2016.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Major gaps in related work\", \"review\": \"The paper \\\"Simple deductive reasoning tests and data sets for exposing limitation of today's deep neural networks\\\" describes several datasets for deep learning to test deductive reasoning abilities of neural networks. The paper tests several neural network architectures (as well as random forests) on these datasets and concludes that neural networks are generally not able to perform deductive reasoning.\\n\\nMy main critique is that the authors do not seem to be aware of any of the research that is going on in the area. The opening statement of the paper is as follows: \\\"Learning for Deductive Reasoning is an open problem not yet explicitly called out in the machine learning world today.\\\" I'm afraid such a statement is simply not true. Reasoning (including deductive reasoning) is a very active research area in the machine learning communities. See below for a very partial list of works.\\n\\nSecond, the paper contains many assertions that neural networks are incapable of reasoning. For example: \\\"The deep neural networks with today\\u2019s notion of a neuron are not suitable for deductive reasoning\\\" The main reason the authors give for this claim is that \\\"neurons\\\" perform only simple arithmetic operations. However, consider that computers only perform simple boolean operations and yet can perform the tasks described in this paper.\\n\\nThese claims are also contradictory to some findings in the literature, which the authors do not seem to be familiar with. Here is a bunch related work that the authors might want to take a look at. (And there are plenty more papers in the area.)\", \"datasets_with_similar_objectives\": \"Nikita Nangia and Samuel R Bowman. Listops: A diagnostic dataset for latent tree learning. arXiv preprint arXiv:1804.06028, 2018.\\n\\n \\\"ANALYSING MATHEMATICAL REASONING ABILITIES OF NEURAL MODELS\\\", by Saxton, Grefenstette, Hill, Kohli, ICLR 2019\\n\\n \\\"INT: An Inequality Benchmark for Evaluating Generalization in Theorem Proving\\\" Wu, Jian, Ba, Grosse, https://arxiv.org/abs/2007.02924\", \"datasets_and_theorem_proving_with_neural_networks\": \"\\\"DeepMath - Deep Sequence Models for Premise Selection\\\" Alemi et al. https://arxiv.org/pdf/1606.04442.pdf\\n\\n \\\"Learning to Prove with Tactics\\\" Gauthier et al. 2018\\n\\n \\\"HOList: An Environment for Machine Learning of Higher-Order Theorem Proving\\\", Bansal et al, ICML 2019\\n\\n \\\"Learning to Prove Theorems via Interacting with Proof Assistants\\\" Yang, Deng, ICML 2019\\n\\n \\\"GamePad: A Learning Environment for Theorem Proving\\\", Huang, Dhariwal, Song, Sutskever, ICLR 2018\\n\\n \\\"Generative Language Modeling for Automated Theorem Proving\\\", Polu, Sutskever, arxiv 2020\\n\\n \\\"Can Neural Networks Learn Symbolic Rewriting?\\\" Piotrowski et al., 2019\\n\\nNeural network architectures for reasoning, including (Tree)RNNs, GNNs:\\n\\n \\\"Can Neural Networks Understand Logical Entailment?\\\", Evans et al. 2018 https://arxiv.org/abs/1802.08535\\n\\n \\\"Graph Representations for Higher-Order Logic and Theorem Proving\\\" Paliwal et al, AAAI 2021\\n\\n Also, plenty of pre-deep learning work by Joseph Urban on how to turn logical formulas into features.\\n\\nRecently, Transformers have been shown to be good at logical reasoning:\\n\\n \\\"Deep Learning for Symbolic Mathematics\\\", Lample and Charton, ICML 2020.\\n\\n \\\"Transformers Generalize to the Semantics of Logics\\\", Hahn et al, 2020. 
https://arxiv.org/abs/2003.04218\\n\\n \\\"Mathematical Reasoning via Self-supervised Skip-tree Training\\\", Rabe et al, 2020, https://arxiv.org/abs/2006.04757\\n\\n\\nMy third point is that the paper does not specify the experiments precisely. What are the hyperparameters of the neural networks?\\n\\nFourth, the paper claims to consider \\\"today's deep neural networks\\\" but does not consider modern neural architectures, such as GNNs and Transformers. These have been shown much better reasoning abilities than RNNs.\", \"in_summary\": \"The paper addresses an important question and I encourage the authors to continue to follow this path. But this work does not consider the existing literature at all and a does not make significant contributions beyond the state-of-the-art as far as I can see.\", \"minor_comments\": \"\\\"A majority of the machine learning models are inductive reasoning models\\\"\\n\\nI believe by \\\"inductive reasoning\\\" the authors here refer to the learning process. I think the learning phase has to be contrasted with the inference phase.\\n\\n\\\"However for the sake of convenience and interpretation, a vector is typically represented as a tensor\\\"\\n\\nThe notion of tensor is a generalization of vector.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
L3iGqaCTWS9 | Hybrid and Non-Uniform DNN quantization methods using Retro Synthesis data for efficient inference | [
"TEJPRATAP GVSL",
"Raja Kumar",
"Pradeep NS"
] | Existing post-training quantization methods attempt to compensate for the quantization loss by determining the quantized weights and activation ranges with the help of training data. Quantization aware training methods, on the other hand, achieve accuracy close to FP32 models by training the quantized model, which consumes more time. Both these methods are not effective for privacy-constrained applications as they are tightly coupled with training data. In contrast, this paper proposes a data-independent post-training quantization scheme that eliminates the need for training data. This is achieved by generating a faux dataset, hereafter called $\textit{‘Retro-Synthesis Data’}$, from the FP32 model layer statistics and further using it for quantization. This approach outperformed state-of-the-art methods including, but not limited to, ZeroQ and DFQ on models with and without batch-normalization layers for 8, 6 and 4 bit precisions. We also introduced two futuristic variants of post-training quantization methods namely $\textit{‘Hybrid-Quantization’}$ and $\textit{‘Non-Uniform Quantization’}$. The Hybrid-Quantization scheme determines the sensitivity of each layer for per-tensor and per-channel quantization, and thereby generates hybrid quantized models that are $10 - 20\%$ more efficient in inference time while achieving the same or better accuracy as compared to per-channel quantization. Also, this method outperformed FP32 accuracy when applied to models such as ResNet-18 and ResNet-50 on the ImageNet dataset. In the proposed Non-Uniform quantization scheme, the weights are grouped into different clusters and these clusters are assigned a varied number of quantization steps depending on the number of weights and their ranges in the respective cluster. This method resulted in an accuracy improvement of $1\%$ against state-of-the-art quantization methods on the ImageNet dataset. | [
"quantization",
"dnn inference",
"data free quantization",
"synthetic data",
"model compression"
] | Reject | https://openreview.net/pdf?id=L3iGqaCTWS9 | https://openreview.net/forum?id=L3iGqaCTWS9 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Rw3kAgYUpuv",
"_6sDODKv6ts",
"Q9IZZx4duwo",
"T-b3eILkx8Z",
"tqZvRSn_ba",
"wpm2uS6GvL-",
"zQuwp6oFtlK",
"uc0Xxbopa1k",
"bsRLt_n0VWE",
"RZFnmHmlFLO",
"qD0YFASBjg7",
"8-8sXqDjdHD",
"3QctDb4V5FS"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040483030,
1605712132515,
1605711943764,
1605706525368,
1605705747232,
1605685405155,
1605684504445,
1605646747743,
1605643487877,
1603943109541,
1603868615927,
1603815489364,
1603623223597
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3674/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3674/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3674/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3674/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Four knowledgeable referees reviewed this paper; one reviewer (weakly) supports accept and other three indicate reject. Even with the rebuttal, all negative reviewers have concerns on the limited novelty and marginal performance improvement, and agree that the paper is not well qualified\\u00a0for the high standard of ICLR.\"}",
"{\"title\": \"Response to AnonReviewer4 [continue]\", \"comment\": \"Following are our responses to respective comments:\\n\\n.1) Here in the retrosynthesis data generation method we have introduced an extra class loss component. The calculation of class loss (step e in algo1) involves the number of classes, so the time to generate data will depend on the number of classes and hence the retro-synthesis data generation takes more time than distilled data. The 10-12 secs of time taken is to generate the data for the resnet50 model trained on Imagenet dataset. The end to end quantization of resnet50 using hybrid quantization method takes ~20secs, 12sec to generate data and 7-8secs for sensitivity estimation for both per-channel and per-tensor quantization on a GTX1080Ti system (more explanation for sensitivity estimation timing in next point)\\n\\n.2) As seen in eq2 of zeroq [3], it is calculating the KLD thrice for sensitivity estimation for k=[2,4,8]. However in our method we are doing only twice, for per-tensor and per-channel, hence there is no timing O/H as compared to ZeroQ. Regarding conflict of interest for optimizing the inference time, determining the configuration of each layer for per-channel or per-tensor is done beforehand and not during runtime, hence there is no timing O/H created during inference due to sensitivity calculation.\\n\\n.3) For our experiments we have determined the value of Th from the sensitivity plot (fig 4 of the updated rebuttal version) from which it is becoming conclusive. So, to answer: Is there any way to find an optimal value? We have not formed any optimization problem as such to solve for Th. In 8-bit settings a significant difference can be observed between per-channel and per-channel accuracy as mentioned in Table-4. \\n\\n.4) We agree that the improvement shown in Table-3 for the Cifar10 dataset is not very significant whereas significant improvement can be observed from results of Table-1. In our humble opinion it is harder to achieve even a marginal improvement over an strong baseline of ZeroQ, which is already close to fp32 accuracy. \\n\\nThe mentioned time complexity of 10-12sec for retro-synthesis data generation is for ImageNet dataset, whereas for Cifar-10 dataset it is only 2-3sec as there are just 10 classes. \\n\\n.5) & 6) In the current scope of this paper we have analyzed the proposed hybrid quantization and non-uniform quantization schemes only for 8-bit precision. We will definitely perform lower precision analysis in the future work. Table-4 and Table-5 are for 8bit precision. We have added the detail in the revised version. Thanks to the reviewer for pointing it out\\n\\nReferences\\n\\n[1] Raghuraman Krishnamoorthi. Quantizing deep convolutional networks for efficient inference: A whitepaper. arXiv preprint arXiv:1806.08342, 2018\\n\\n[2] Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1325\\u20131334, 2019\\n\\n[3] Yaohui Cai, Zhewei Yao, Zhen Dong, Amir Gholami, Michael W Mahoney, and Kurt Keutzer. Zeroq: A novel zero shot quantization framework. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 13169\\u201313178, 2020.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thanks to the reviewer for the detailed comment and feedback\\n\\nBefore Addressing each of the comments, we would like to provide more details about the proposed methodology\\n\\n.1) Keeping in mind the concerns about timings for quantizing the model, we would like to clarify that all the three proposed methods namely retro-synthesis data, hybrid quantization and non-uniform quantization are offline quantization schemes and hence the data generation process, sensitivity analysis etc. are done before deployment of the model and not during runtime.\\n\\n.2)The proposed hybrid quantization method is not an extension/alternative of ZeroQ\\u2019s mixed precision method. Previous works have shown [1], [2] that a fully per-tensor quantization leads to accuracy loss whereas a fully per-channel quantization is not hardware friendly, because of a different zero-point and scale values are associated per each channel. Hence we proposed a hybrid quantization method that uses a combination of per-tensor and per-channel schemes for different layers for achieving accuracy and performance improvement. Though we have used a similar method as that of ZeroQ for sensitivity calculation our goal is very different. So comparing the proposed hybrid-quantization scheme and the ZeroQ\\u2019s mixed precision scheme would be of less relevance.\"}",
"{\"title\": \"Response to AnonReviewer3 [continue]\", \"comment\": \"Comment and Questions:\\n\\n1) Thanks for the valuable suggestion. The benefits of resorting on the proposed \\\"Retro-Synthesis Data\\\" is shown in Fig.5 of the uploaded rebuttal version. From the sensitivity plot in Fig. 5 it is evident that there is a clear match between the layer sensitivity index plots of the proposed retro-synthesis data (red-plot) and the ground truth data (green plot) whereas huge deviation is observed in case of random data (blue plot). Hence it can be concluded that the proposed retro-synthesis data generation scheme can generate data with similar characteristics as that of ground truth data and is more effective as compared to random data.\\n\\n2) The Gaussian Loss (L_G) component makes sure that the generated data is normally distributed. $\\\\mu_0'$ and $\\\\sigma_0'$ are the mean and standard deviation of the current batch of input images respectively. Gaussian loss (L_G) tries to keep $\\\\mu_0'$` and $\\\\sigma_0'$ close to 0 and 1 respectively to assure that the generated retro-synthesis data is normally distributed.\\n\\n3) Non uniform quantization is proposed for weight quantization because it optimizes to reduce the quantization error. The obvious choice that comes into picture for non-uniform quantization is K-means. However, one of the key features of CNN is the large numbers of weights and these weights ranges are not very wide and also these have similar values. Hence for K-means it will take a lot of time to reconstruct a codebook as distance calculation for all weights is not efficient. Hence we have chosen the proposed non-uniform quantization method in contrast to the K-means.\\n\\nPlease let us know for any further queries.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks to the reviewer for the valuable feedback and suggestions. Please find our responses below for respective queries.\\n\\n[Query:] As far as I understood, the overall approach of this paper follows the ZeroQ framework, e.g., uses the generated distilled data in the same way, also computes the effect of quantization based on KL divergence, and allocates per-layer/channel bit precision using the same Pareto frontier approach\\n\\n[Author\\u2019s Response:] The proposed Hybrid Quantization uses the KLD method for sensitivity estimation similar to ZeroQ, but with altogether a different goal to determine whether a layer can be quantized using a per-tensor or per-channel quantization scheme. Whereas ZeroQ uses it for accurately estimating the required bit-width in their mixed precision scheme. Apart from accuracy improvement Hybrid Quantization also addresses a very practical aspect of DNN model deployment on edge/mobile devices which is the inference time. We have shown an inference time improvement of ~10-20% by benchmarking the models on Samsung Galaxy S20 mobile platform with Qualcomm Hexagon DSP hardware. This aspect of performance improvement is not discussed by ZeroQ. Also, as per our knowledge, a combination of per-channel and per-tensor quantization has not been explored for accuracy or performance improvement in existing literature\\n\\n[Query:] The proposed \\\"Retro Synthesis\\\" is the main innovation, but is rather poorly explained (what's the motivation for having three components L_BN, L_G, L_C, to the total loss, and are they really weighted equally in the experiments?) and lacks insight into why it works\\n\\n[Author\\u2019s Response:] How effectively the proposed \\\"Retro-Synthesis Data\\\" can represent the desired data classes as compared to random data is depicted in Fig.1 of the paper. Also, When the actual training/testing data is not available, instead of using the random data the benefits of resorting on the proposed \\\"Retro-Synthesis Data\\\" is shown in Fig.5 of the uploaded rebuttal version. From the sensitivity plot in Fig. 5 it is evident that there is a clear match between the layer sensitivity index plots of the proposed retro-synthesis data (red-plot) and the ground truth data (green plot) whereas huge deviation is observed in case of random data (blue plot). Hence it can be concluded that the proposed retro-synthesis data generation scheme can generate data with similar characteristics as that of ground truth data and is more effective as compared to random data\", \"l_bn\": \"the aggregated loss between, stored actual batch norm statistics of the model and the intermediate activation statistics computed using generated retro-synthesis data for every iteration.\", \"l_g\": \"Loss between generated retro-synthesis data distribution and the gaussian distribution. This loss component makes sure that the generated retro-synthesis data is normally distributed.\", \"l_c\": \"Loss between the softmax output of the forward pass and target vector for class C. 
This loss is introduced to ensure the generated retro-synthesis data for an image class have the same statistics matching to the corresponding class of the original or actual dataset.\\nYes, All the three losses are equally weighted\\n\\n[Query] : Since the three proposed improvements focus on different aspects of ZeroQ, ablations that analyze the effect of each in isolation (i.e., baseline ZeroQ + proposed hybrid quantization, baseline ZeroQ + proposed non-uniform quantization) are crucial for evaluating their contributions.\\n\\n[Author\\u2019s Response] : Table-1 of the paper compares baseline ZeroQ method with our proposed retro-synthesis data baseline method. From the results of Table-1, it is evident that our baseline method outperforms ZeroQ baseline results. Hence it is clear that the retro-synthesis data generation method is more accurate than the distilled data in ZeroQ. So, for the subsequent experiments to showcase the effectiveness of the proposed Hybrid Quantization and non-uniform quantization methods we chose to compare them against the proposed baseline method rather than ZeroQ baseline method. However we can add the comparison results of ZeroQ baseline+hybrid method and ZeroQ baseline + non uniform in the final version of the paper.\\n\\n[Query]: Details such as how the pre-trained models were obtained (were they trained from scratch, or pre-trained and obtained from the original ZeroQ authors) are not provided, which can significantly impact the empirical results and how they can be understood.\\n\\n[Author\\u2019s Response] We have used pre-trained models from PytorchCV (https://pypi.org/project/pytorchcv/) which are the same as what ZeroQ has used. We will add this detail in the final version of the paper.\"}",
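For readers who want to see how the three loss components above fit together, the following is a hedged sketch of the generation loop; `bn_statistics_loss` is a hypothetical hook-based helper (not from the paper) that returns the logits together with the aggregated mismatch between stored BN statistics and the activations of the generated batch, and the step count and learning rate are illustrative.

```python
import torch
import torch.nn.functional as F

def generate_retro_synthesis(model, target_class, batch_shape, steps=500, lr=0.1):
    """Sketch: optimize random noise so that (i) intermediate activations
    match the model's stored BN statistics where BN layers exist (L_BN),
    (ii) the batch stays roughly N(0, 1) distributed (L_G), and (iii) the
    model assigns it to target_class (L_C). The three losses are weighted
    equally, as stated above."""
    x = torch.randn(batch_shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    target = torch.full((batch_shape[0],), target_class, dtype=torch.long)
    for _ in range(steps):
        opt.zero_grad()
        logits, l_bn = bn_statistics_loss(model, x)    # hypothetical helper
        l_g = x.mean().abs() + (x.std() - 1.0).abs()   # Gaussian loss L_G
        l_c = F.cross_entropy(logits, target)          # class loss L_C
        (l_bn + l_g + l_c).backward()
        opt.step()
    return x.detach()
```

Because the class loss is computed per target class, the generation time grows with the number of classes, which matches the 10-12 sec (ImageNet) vs. 2-3 sec (CIFAR-10) figures the authors report elsewhere in this thread.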
"{\"title\": \"Response for AnonReviewer1 [Continue]\", \"comment\": \"references\\n \\n[1] Haozhi Qi, Chong You, Xiaolong Wang, Yi Ma, and Jitendra Malik. Deep isometric learning for visual recognition. arXiv preprint arXiv:2006.16992, 2020\\n\\n[2] Sun, Y., Wang, X., Liu, Z., Miller, J., Efros, A. A., and Hardt, M. Test-time training for out-of-distribution generalization. arXiv, 2019\\n\\n[3] Chen, T., Kornblith, S., Norouzi, M., and Hinton, G. A simple framework for contrastive learning of visual representations. arXiv, 2020\\n\\n[4] Bai, S., Koltun, V., and Kolter, J. Z. Multiscale deep equilibrium models. arXiv, 2020\"}",
"{\"title\": \"Response for AnonReviewer1\", \"comment\": \"Thanks to the reviewer for the valuable feedback and suggestions. Please find our responses below for respective queries\\n\\n\\u2022 We agree with you that, synthesizing the test data may not be needed in general, but when we consider scenarios where the hardware is customized for quantized models, such as cloud based deep learning inference providers or smartphone providers there is obvious necessity to provide a generic quantization service (FP32 models can be converted to lower precision), without needing the data or receiving data from their customers to fine-tune the models. Else they become strictly dependent on the model developers to share their data or should expect the model developers to do quantization. Both of these cases add an additional overhead to the entire process. Even for model developers having the capability to quantize their own models without the need to generate actual data results in quicker validation of the models and saves significant time.\\nAlso, in case of data privacy applications which involve customer sensitive information such as medical records, credit card details, security numbers etc. since it is very difficult to get access to necessary data from the customers or may have access to very less amount of data, the proposed method is very helpful to generate the data in required quantity for accurately fine tuning the parameters.\\n\\n\\u2022 How effectively the proposed \\\"Retro-Synthesis Data\\\" can represent the desired data classes as compared to random data is depicted in Fig.1 of the paper. Also, When the actual training/testing data is not available, instead of using the random data the benefits of resorting on the proposed \\\"Retro-Synthesis Data\\\" is shown in Fig.5 of the uploaded rebuttal version. From the sensitivity plot in Fig. 5 it is evident that there is a clear match between the layer sensitivity index plots of the proposed retro-synthesis data (red-plot) and the ground truth data (green plot) whereas huge deviation is observed in case of random data (blue plot). Hence it can be concluded that the proposed retro-synthesis data generation scheme can generate data with similar characteristics as that of ground truth data and is more effective as compared to random data.\\n\\n\\u2022 1) Despite the fact that the proposed approach achieves marginal improvement over state-of-the-art methods in case of models with batchnorm layers, it should be noted that the mentioned state-of-the-art methods achieve accuracy close to FP32 accuracy and at that level it becomes exponentially harder to achieve accuracy gains. Hence in our humble opinion achieving those margins is a significant improvement as compared to state-of-the-art methods. Also the effectiveness of the proposed method is evident in case of ResNet-18 and ResNet-50 models where it outperformed FP32 accuracy.\\n2) The idea behind designing the approach independent of the batchnorm layers is to avoid any limitation that refrains anyone from using the proposed method. As mentioned in [1], the major challenge with the commonly used normalization layers in modern networks require certain statistical independence assumptions to hold and large enough batch size or channel number for precise estimation of such statistics. This drawback significantly limits their applications to robust learning [2], contrastive learning [3], implicit models [4], object detection etc. 
\\nThe ISONETs [1] which does not have batchnorm layers may stand as inspiration for many such model designs in future, and as stated in our paper, the existing state-of-the-art data free quantization methods fail to quantize this model.\\n\\n\\u2022 1) The proposed Hybrid Quantization technique is an unique approach encapsulating both per-tensor & per-channel schemes in it by judiciously choosing the respective scheme for each layer based on the computed sensitivity for accuracy loss or performance gain. Whereas the proposed Non-uniform quantization method is a per-tensor based scheme with varied bins/steps allocated for respective weight ranges in each layer to achieve better accuracy as compared to per-tensor uniform quantization method. Hence in our opinion the proposed two methods follow different approaches unlike the mentioned precision selection and channel splitting.\\n2) The mentioned performance improvement of 10-20% is measured by quantizing and benchmarking the respective models using fully per-channel, fully per-tensor and the proposed Hybrid Quantization approaches on Samsung Galaxy S20 mobile platform having Qualcomm Hexagon DSP hardware.\"}",
"{\"title\": \"Updated version of the paper\", \"comment\": \"Thanks to All the reviewers for reviewing our work and providing constructive feedback. As per the reviewers' comments, we have updated our manuscript with the following details in the Appendix section to provide better clarity about the proposed method and to demonstrate its effectiveness in comparison to state-of-the-art methods.\\n\\n\\u2022 Figure-4 with the sensitivity plot describing the respective layer's sensitivity for per-tensor and per-channel quantization schemes in case of MobileNet-V2 model. \\nThe plot clearly shows that only a few layers in the MobileNetV2 model are very sensitive to the per-tensor scheme and other layers are equally sensitive to either of the schemes. Hence we can achieve better accuracy by just quantizing those few sensitive layers using per-channel scheme and the remaining layers using per-tensor scheme. \\n\\n\\u2022 Figure-5 with sensitivity plot describing the respective layer's sensitivity for the original ground-truth dataset, random data, and the proposed retro-synthesis data for ResNet-18 model quantized using the per-channel scheme.\\nFrom the sensitivity plot, it is evident that there is a clear match between the layer sensitivity index plots of the proposed retro-synthesis data (red-plot) and the ground truth data (green plot) whereas a huge deviation is observed in the case of random data (blue plot). Hence it can be concluded that the proposed retro-synthesis data generation scheme can generate data with similar characteristics as that of ground truth data and is more effective as compared to random data.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Thanks to the reviewer for the valuable feedback and suggestions. Please find our responses below for respective queries\\n\\nComments about the method [Author\\u2019s Response]: Thank you for introducing the reference [1]. We will definitely try to contrast our work with [1] in the final submission version else will add it as a part of our future work scope.\\n\\n\\nWe agree that there are methods like [2] to find correlation between the weight distribution statistics and activation statistics. However as mentioned in Figure-2 of (Nagel et al. 2019) incase of models like MobileNetV2, where we observe huge variation in weight ranges across the channels of some layers, the correlation may not hold good. In such cases, we need a dataset to accurately estimate the activation ranges. As it is not possible to get access to the dataset always, the proposed retro-synthesis scheme is very much needed.\\n\\nHybrid quantization [Author's response]:The proposed Hybrid quantization scheme is a novel method that analyzes a combination of per-channel and per-tensor approaches for achieving better accuracy and performance. In this scope of the paper we have analyzed the proposed hybrid quantization scheme for 8-bit precision. As per your suggestion we try to apply the method in [3] to analyze our hybrid approach for lower bit precision (6, 4 and 2 bits) as well as a future scope of work. Thank you for the suggestion. \\n\\nNon-uniform quantization [Author\\u2019s response:]:\\n\\n(1) The two obvious choices that strike the thought are K-means and Lloyd\\u2013Max quantizer. However, one of the key features of CNN is the large numbers of weights and these weights ranges are not very wide and these have similar values. Hence For K-means it will take a lot of time to reconstruct a codebook as distance calculation for all weights is not efficient. On the other hand, Lloyd-max quantizer performance is also affected by the number of data points when performing an integrated calculation in the actual computing environment. Hence we have chosen the proposed method in contrast to the above methods.\\n\\n(2) Only the weights are quantized using the proposed non-uniform approach, the activations are quantized using asymmetric uniform quantization.\", \"references\": \"Markus Nagel, Mart van Baalen, Tijmen Blankevoort, and Max Welling. Data-free quantization through weight equalization and bias correction. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1325\\u20131334, 2019\\n\\nPlease let us know for any further queries.\"}",
"{\"title\": \"A nicely written paper, but it might need more novelty\", \"review\": [\"This paper proposed three different techniques to improve the quality of the post-training quantization (PT) results. The main claim is about Retro-Synthesis Data, which allows the calibration of the quantization parameters without the training data. There are two additional techniques, Hybrid Quantization and Non-Uniform Quantization, to increase the accuracy of PT.\", \"Although the overall paper is nicely written, the contents might need more novelty for being accepted in ICLR. Here are a few concerns related to it.\", \"The \\\"Retro-Synthesis Data\\\" scheme sounds interesting, but not sure how much practical impact it might bring in. The authors assumed the model deployment scenario where the full-precision model is given without any training data. But it does not necessarily mean that the unlabeled data is also not available; in most cases, test data (without labels) from the deployed application is available for tuning of parameters (E.g., Sec 4 of [Sun et al., NeurIPS19]). In such a case, it is not clear how much benefit we can expect from \\\"synthesizing\\\" the test data for tuning quantization parameters.\", \"Even if we believe that synthesizing the test data is necessary, it is not clear why \\\"Retro-Synthesis Data\\\" is well suited for the post-training quantization. Although the approach is interesting, how much representative the data created by the proposed method is for being used for the parameter tuning? It would be highly desirable to explain in the derivation of \\\"Retro-Synthesis Data\\\" how it can provide information particularly useful for better post-training quantization.\", \"The strong empirical evidence might say more than just a detailed explanation. But unfortunately, the performance gain in the experimental results seems to be marginal. The authors claim that the proposed method is good for the models without batchnorm, but it does not sound very convincing given the fact that batchnorm is extremely popular these days...\", \"The additional techniques, Hybrid Quantization and Non-Uniform Quantization, seem to be incremental from the previous work about quantization-sensitivity and outlier aware controls of neural networks (such as precision selection and channel splitting). Also, the authors claimed that there is 10~20% performance gain from Hybrid Quantization, but without much justification on how such hardware performance is measured/estimated.\", \"To sum, this is a well-written paper but it might need more novelty for being accepted in ICLR.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"OK ideas that come across as incremental; good numbers but questionable science.\", \"review\": \"This paper considers the problem of data-free post-training quantization of classfication networks. It proposes three extensions of an existing framework ZeroQ (Cai et al., 2020): (1). in order to generate distilled data for network sensitivity analysis, the \\\"Retro Synthesis\\\" method is proposed to turn a random image into a one that represents a desired class label without relying on batch norm statistics like in ZeroQ; (2). a hybrid quantization strategy is proposed to optionally provide finer-grained per-channel quantization instead of the typical per-layer quantization; (3). a non-uniform quantization grid is proposed to better represent quantized weights, instead of uniform quantization as in ZeroQ. Empirical evaluation demonstrate the effectiveness of the proposed approach.\\n\\n==========================================================\", \"pros\": \"1. The proposed \\\"Retro Synthesis\\\" approach broadens the scope of ZeroQ to a wider range of neural network architectures by lifting the requirements of having batch normalization layers, and thus may significantly extend the practical applicability of data-free post-training quantization.\\n2. The proposed approach seems to run very fast (but on the same order of magnitude as ZeroQ).\\n\\n\\n==========================================================\", \"cons\": \"1. The significance of the contribution appears limited in scope, as the three proposed methods (especially the latter two, \\\"hybrid quantization\\\" and non-uniform quantization) read more like incremental modifications to components of the existing ZeroQ framework. As far as I understood, the overall approach of this paper follows the ZeroQ framework, e.g., uses the generated distilled data in the same way, also computes the effect of quantization based on KL divergence, and allocates per-layer/channel bit precision using the same Pareto frontier approach. The proposed \\\"Retro Synthesis\\\" is the main innovation, but is rather poorly explained (what's the motivation for having three components L_BN, L_G, L_C, to the total loss, and are they really weighted equally in the experiments?) and lacks insight into why it works. And it is questionable whether the remaining two proposals count as contributions: \\\"hybrid quantization\\\" is a simple heuristic for deciding when to use per-channel vs per-layer quantization, and non-uniform quantization is a slightly more sophisticated quantization grid (simply divides weights into quartiles and does uniform quantization in each quartile). I have no issue with a simple method if it's well motivated and works well, but:\\n2. Moreover, the experiments are plagued with a lack of careful analysis of the proposed methods and details for reproducability. Since the three proposed improvements focus on different aspects of ZeroQ, ablations that analyze the effect of each in isolation (i.e., baseline ZeroQ + proposed hybrid quantization, baseline ZeroQ + proposed non-uniform quantization) are crucial for evaluating their contributions. Details such as how the pre-trained models were obtained (were they trained from scratch, or pre-trained and obtained from the original ZeroQ authors) are not provided, which can significantly impact the empirical results and how they can be understood. 
The issue is not specific to this paper -- the field of neural network compression can benefit from better reproducability and more robust evaluation, and particularly methods that operate post-training can evaluate on a shared repository of pre-trained models and report accuracy loss.\\n\\n==========================================================\\n\\nComments & Questions:\\n1. Since the paper argues that \\\"it is possible to generate image data with similar statistics\\\" based on class scores (under the proposed \\\"Retro Synthesis\\\" approach), it makes sense to validate this is indeed the case qualitatively by comparing against the alternative \\\"ground truth\\\" approach based on matching batch norm statistics, e.g. by comparing the resulting model layer sensivity on distilled data produced by these two approaches, like in Figure 2 of the ZeroQ paper.\\n2. What's the Gaussian loss (LG) component of the main loss function (eq (1))? Similarly, step (e) of algorithm 1 is unclear -- what is \\\\mu_0' and \\\\sigma_0'?\\n3. Maybe I'm missing something, but wouldn't something obvious like k-means perform just as well (if not better) compared to the proposed non-uniform quantization approach based on uniformly quantizing within quartiles?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Post-training quantization without dataset access to preserve privacy\", \"review\": \"This work uses post-training quantization without access to training data for privacy concerns. Instead, useful statistics are estimated using a retro-synthesis data obtained from the FP baseline. I have a few comments, some concerns and some suggestions I think can be used to improve this work.\", \"overview_of_prior_work\": \"Most competing approaches are included and representative prior arts are mentioned which is good. In the descussion about post-training quantization methods, the authors conclude that prior arts observe accuracy degradations. There have been works on post-training quantization that theoretically predict such degradation and overcome it by increasing precision as needed using the concept of noise gains [1]. I think the authors should contrast such work with theirs. It is my impression that noise gains in [1] can be used to improve the presented method.\", \"comments_about_the_method\": \"\", \"retro_synthesis_data_generation\": \"The retro-synthesis data is used to determine activation ranges which are useful for quantization. This is a clever method. I wonder if that is really necessary though. Can't we follow an analysis similar to [2] in order to predict activation statistics from weight statistics (which are available)?\", \"hybrid_quantization\": \"It seems the concept of per-tensor quantization has already been studied in [3] and also use the concept of noise gains as above to analytically determine the required number of bits for each tensor. I think this method can be useful to improve/validate the proposed hybrid quantization technique.\", \"non_uniform_quantization\": \"\", \"i_have_two_issues\": \"(1) why not simply use the Lloyd-Max algorithm (which optimizes non-uniform quantization), (2) it is unclear how activations are quantized.\\n\\nThe experimental results look good.\\n\\n[1] Sakr, C., Kim, Y., & Shanbhag, N. Analytical guarantees on numerical precision of deep neural networks. In International Conference on Machine Learning, ICML 2017.\\n\\n[2] He, K., Zhang, X., Ren, S., & Sun, J. Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. In Proceedings of the IEEE international conference on computer vision, CVPR 2015.\\n\\n[3] Sakr, C., & Shanbhag, N. Per-tensor fixed-point quantization of the back-propagation algorithm. In 7th International Conference on Learning Representations, ICLR 2019.\", \"post_rebuttal_comments\": \"I thank the authors for their feedback. I have no modification to make to my original review.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Official Blind Review #4\", \"review\": \"The paper proposes a data-independent post-training quantization scheme by generating a faux dataset without depending on the BN layer statistics of the FP32 model. The authors also introduce two variants of post-training quantization methods, hybrid quantization and non-uniform quantization methods.\", \"strengths_of_the_paper\": [\"The proposed quantization method is practically applicable to privacy-constraint applications because no training data are required and the method works for any model.\", \"The proposed Hybrid Quantization scheme addresses a couple of benefits, one is for improving accuracy and the other is for faster inference time. Both of them are important metrics for practical applicability.\", \"The paper is well written and easy to follow.\"], \"weaknesses_of_the_paper\": [\"Novelty: The novelty is moderate but not strong enough. Specifically, (1) a zero-shot quantization framework to generate a synthetic dataset has been studied in the previous literature ZeroQ (using the distilled data engineered to match the statistics of BN layers to perform post-training quantization). But this paper employs an alternative faux dataset, called by the Retro-Synthesis Data, that does not depend on the BN layer statistics of the original data, which should be appreciated. Moreover, most of DNN models employ the BN layer in their construction, what is the merit in that case? (2) The same layer sensitivity metric as ZeroQ, the KLD between the original model and the quantized model has been used in the proposed Hybrid Quantization scheme. The main idea behind Hybrid Quantization using the KLD to determine whether a layer is suitable to be quantized using the per-tensor or per-channel scheme, seems to be same. Overall, the contribution is a bit incremental and seems not novel to me compared with ZeroQ and DFQ.\", \"Evaluation: Further experiments and more ablation studies should be done. The provided experimental results are too weak to support the strength of using the retro-synthesis data.\"], \"detailed_comments\": \"(1) Algorithm 1 to generate retro-synthesis data seems to have a dependency on the target class(C). If there are many classes to be classified, is the time complexity also increasing? The paper states that generating the retro-synthesis data takes 10~12 sec. Is this for CIFAR-10 or ImageNet dataset? The proposed scheme seems to be more time-consuming than ZeroQ's 3 sec for generating the distilled data. In the ZeroQ paper, the end-to-end quantization of ResNet50 on ImageNet data takes almost 30 sec, which takes 3 sec to generate the synthetic data, 12 sec to estimate the sensitivity for all layers, and 14 sec to perform Pareto Frontier optimization etc. Could you provide the detailed timing breakdown on which experimental setting?\\n\\n(2) Computing the KLD in Algorithm 2 has more timing O/H than ZeroQ since it is executed twice for both per-tensor and per-channel quantization schemes per layer. Then does it take time twice more compared with ZeroQ? Could you provide the detailed timing breakdown or time O/H evaluation on your Hybrid Quantization configuration? Is the time O/H ignorable compared to the benefit of faster inference time by exploiting the hybrid quantization scheme? Approaches to achieve the goal of optimizing the inference time cause a conflict of interest.\\n\\n(3) A threshold value Th is sub-optimal and determined heuristically to decide either per-channel or per-tensor quantization. 
Is there any way to find an optimal value? The paper uses Th=0.001, but there is little improvement over Th=0. Could you explain why this happens? Further ablation study on different Th values should be provided. In the 8-bits setting, there seems no significant difference b/w per-channel and per-layer schemes due to its perturbation effect. More explanation on the result is required.\\n\\n(4) Results (in Table 3) of comparing the proposed method to ZeroQ with W8A8, W6A8, and W4A8 on ResNet model with CIFAR-10 are too weak to support the strength of using the retro-synthesis data. There seems to be a bit improvement over ZeroQ while the proposed scheme takes more time(10~12 sec for the retro-synthesis data) to generate a synthetic dataset compared with ZeroQ(3 sec for the distilled data). \\n\\n(5) Results on ImageNet data (in Table 4 and 5) have a weak contribution if the bitwidth used in the experiment is 8 bit(You should describe a precision setting in the result tables). More aggressive setting of bitwidths, say 6 or 4 bits, should be provided comparing with the SOTA post-training quantization schemes. \\n\\n(6) The paper also presents another variant of using the retro-synthesis data, a non-uniform quantization scheme. Further evaluation (in Table 5) should be done to support its superiority by comparing it with the SOTA non-uniform quantization schemes, e.g. PoT, APoT, DDQ, etc. Especially, the non-uniform quantization gets more important in the weaker representation range, say low-precision quantization schemes below 8 bits. Future more experiments should be provided.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
EeeOTYhLlVm | EpidemiOptim: A Toolbox for the Optimization of Control Policies in Epidemiological Models | [
"Cédric Colas",
"Boris Hejblum",
"Sébastien Rouillon",
"Rodolphe Thiebaut",
"Pierre-Yves Oudeyer",
"Clément Moulin-Frier",
"Mélanie Prague"
] | Epidemiologists model the dynamics of epidemics in order to propose control strategies based on pharmaceutical and non-pharmaceutical interventions (contact limitation, lockdown, vaccination, etc). Hand-designing such strategies is not trivial because of the number of possible interventions and the difficulty of predicting long-term effects. This task can be cast as an optimization problem where state-of-the-art machine learning algorithms such as deep reinforcement learning might bring significant value. However, the specificity of each domain - epidemic modelling or solving optimization problems - requires strong collaborations between researchers from different fields of expertise.
This is why we introduce EpidemiOptim, a Python toolbox that facilitates collaborations between researchers in epidemiology and optimization. EpidemiOptim turns epidemiological models and cost functions into optimization problems via a standard interface commonly used by optimization practitioners (OpenAI Gym). Reinforcement learning algorithms based on Q-Learning with deep neural networks (DQN) and evolutionary algorithms (NSGA-II) are already implemented. We illustrate the use of EpidemiOptim to find optimal policies for dynamical on-off lockdown control under the optimization of death toll and economic recess using a Susceptible-Exposed-Infectious-Removed (SEIR) model for SARS-CoV-2/COVID-19.
Using EpidemiOptim and its interactive visualization platform in Jupyter notebooks, epidemiologists, optimization practitioners and others (e.g. economists) can easily compare epidemiological models, cost functions and optimization algorithms to address important choices to be made by health decision-makers. | [
"epidemiology",
"covid19",
"reinforcement learning",
"evolutionary algorithms",
"multi-objective optimization",
"decision-making",
"toolbox"
] | Reject | https://openreview.net/pdf?id=EeeOTYhLlVm | https://openreview.net/forum?id=EeeOTYhLlVm | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Gi6bZIdc3Ie",
"pBftIuW44KM",
"HBDBz26BfT",
"yChfkjP3W5V",
"XgtR106rQ8",
"mq4m7spuJKf"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512600,
1605703168760,
1605703111651,
1604270269629,
1603925742692,
1603356448163
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3672/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3672/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3672/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3672/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3672/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The reviewers agree that the contributions may not be relevant to the ML research community or perhaps are a poor fit for the venue, but otherwise find the work potentially useful and addressing a timely topic. Because the paper focuses on a simulation environment for existing epidemiological models, reviewers comment that the technical and methodological novelty is limited.\"}",
"{\"title\": \"Answer to R3\", \"comment\": \"Here we answer specific comments and questions from R3.\\n\\n**About the study of other intervention modalities:**\\nStudying other forms of intervention strategies is indeed a very interesting topic. However, to do this reliably, we need to be able to predict the impact of these intervention strategies on the epidemic. This means that we need data on the propagation of the epidemic in a region where this strategy was implemented. Some approaches use an LSTM-based model conditioned on intervention strategies to predict the dynamics of the epidemic. The prediction model is trained on various regions, where each region implemented specific intervention strategies. Based on this model, one might try to train an intervention policy. One potential pitfall in doing so is that the learning algorithm will try to implement intervention strategies in regions where it was never applied in real life. This leads to predictions that are out of the distribution of the training data. The \\u201cclosing schools\\u201d intervention might have a different impact depending on the country and relying on the generalization of a recurrent network to predict its impact in a new country where it was never applied might lead to poor predictions of the epidemic dynamics, which in turn will lead to poor intervention strategies. In our case, we only implemented a binary lockdown intervention modality, but we can be more confident about its impact on the epidemiological model because we already observed it in the past in the same regions (first wave of spring 2020). We just acquired recent data on the period of May 11 - October 30 that corresponds to a period of medium-level contact restrictions that was not as intensive as the lockdown periods (early spring and fall in France). This fresh data will help us to model the impact of such medium-level restrictions on the epidemic and will help us refine our model.\"}",
"{\"title\": \"Main answer to all reviewers\", \"comment\": \"We would like to thank the reviewers for their feedback that will help refine this paper. Because the three reviews share most of their discussion points, we will write a common answer.\\n\\nAll reviewers noted the relevance and importance of the topic and two reviewers noted that the paper is clearly written and organized. However, all reviewers are concerned with the lack of novelty or technical contributions of this paper and two of them think that ICLR might not be the best venue for this type of work.\\n\\nRegarding technical novelties and experiments, our paper introduces a novel variant of DQN that considers constraints via the training of additional Q-networks and samples its own constraints during training. It proposes a set of experiments investigating the optimization of lock-down policies with 4 different algorithms, where most past research only consider one or two. Besides, we show that they can be complementary (e.g. NSGA is efficient in low economic costs, DQN in low health costs). Although these two contributions may appear as limited with respect to the current ICLR standards (as noted by the reviewers), we argue below that the paper presents other forms of less usual but important contributions to the ML community.\\n\\nThe EpidemiOptim paper presents conceptual and organizational contributions. We formalize the problem of learning intervention strategies to mitigate epidemics propagations as a multi-objective optimization problem. Going further, we propose a modular view of the different components involved in such problematics: \\n* The learning environment should take the form of a Gym environment that contains three modules: an epidemiological model, a multi-objective cost function and action modalities.\\n* The optimization algorithm should be multi-objective.\\nIn contrast to past approaches, this library does not simply present a novel Gym environment. It facilitates the addition of new modules by presenting a general interface that can be used to answer a vast list of research questions (e.g. impact of model uncertainty on optimal strategies, impact of the non-episodic nature of the underlying task, addition of new intervention modalities, etc.). In addition to the modules it contains, we provide relevant documentation as well as visualization and comparison tools to facilitate collaborative use. We agree that, in its current stage, the EpidemiOptim library contains a limited number of module implementations: 1 epidemiological models, 2 cost functions and 4 algorithms. However, we aim at presenting a collaborative library where anyone can easily contribute and add modules. \\n\\nBecause of the limited amount of novel technical contributions, R2 and R3 argue that this paper should not be published in an ML conference such as ICLR. We agree that this type of paper does not currently fit the current distribution of papers in ML venues. However, we believe that it should be the case because making progress in research is not only about the results but also about the tools we give ourselves to tackle interesting questions. This paper is about that: presenting the necessary tools to tackle the collaborative study of intervention strategies in the context of epidemics. 
The particular problem of the optimization of intervention strategies requires the attention and participation of optimization practitioners, most of whom attend ML conferences such as ICLR and are not familiar with the family of problems addressed by EpidOptim, which include novel challenging scientific dimensions and are societally important. We argue that accepting papers presenting toolboxes such as ours should be about judging their relevance in terms of the future organization of the research topic they tackle and judging their potential future impact in research and in the real world. In this scenario, accepting a toolbox paper would be a recommendation, stating that this toolbox presents a rational approach to organize research in this particular direction. Indeed, one cannot ask such a library to already be used by all the community and to already have impact in the real world (R1). If we submit this paper to one of the main ML venues, it is precisely because we think that, to lead to real world applications, we need the involvement of expert ML researchers focusing on this type of problems with the tools we propose to tackle them.\\n\\n\\nR1 and R3 suggest extending the set of analyses, to compare more agents and disease scenarios. R2 suggests investigating the handling of model uncertainty, new multi-objective algorithms or the explainability of intervention policies. R3 also suggests to study sequential combinations of policies or other intervention modalities. This demonstrates their interests in a diverse set of research questions that could be tackled via the EpidemiOptim framework. Of course, a single group cannot investigate them all, but a community of researchers working with a common and collaboratively-designed tools could.\"}",
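To illustrate the Gym-style interface the response describes, here is a minimal hedged usage sketch; the environment id, action semantics and `info` keys are hypothetical placeholders for illustration, not the toolbox's documented API.

```python
import gym

env = gym.make("EpidemicLockdown-v0")  # hypothetical id: SEIR model + costs

obs = env.reset()
done = False
totals = {"health": 0.0, "economic": 0.0}
while not done:
    action = env.action_space.sample()          # e.g. 0 = open, 1 = lockdown
    obs, reward, done, info = env.step(action)  # reward mixes both costs
    totals["health"] += info.get("health_cost", 0.0)       # hypothetical keys
    totals["economic"] += info.get("economic_cost", 0.0)
print(totals)
```

Because the environment exposes the standard reset/step loop, any multi-objective optimizer (DQN variants, NSGA-II, etc.) can be swapped in without touching the epidemiological model, which is the modularity argument made above.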
"{\"title\": \"Well written and easy to read paper. Unfortunately, work lacks novelty and real impact. Still seems premature and could benefit from more environments, agents, analysis and adoption.\", \"review\": \"**Summary**\\nThis paper introduces a library that implements 1 infectious disease model and 3 agents in a gym environment. The authors provide analysis for a covid-19 lock down scenario using their library.\\n\\n**Strengths**\\nThe paper addresses an important and timely topic. The paper is easy to read and follow. The authors have open sourced their library and it seems to be well documented.\\n\\n**Weaknesses**\\nThe main weakness of the paper is the strength of the contributions. The authors\\u2019 main contributions are a gym environment for a specific epidemiological model.\\n\\nUnfortunately, the novelty is somewhat lacking. As the authors mention in the conclusion section this line of work has been heavily researched. There are many infectious disease models and simulation environments already out there. E.g. a gym like interface has even been already open sourced earlier this year https://github.com/google/ml-fairness-gym/blob/master/environments/infectious_disease.py .\\n\\nIn terms of impact, the number of environments and agents is also quite limited at this stage. It\\u2019s also unclear if there is likely to be adoption by serious policy makers. Absent such impact statements it remains as one of many simulation frameworks out there.\\n\\n**What could make the paper better**\\na) Many more environments and agents need to be implemented such that this library has the potential to become the standard for infectious disease simulation.\\nb) A lot more analysis comparing agents and disease scenarios that truly unearth interesting scientific observations. \\nc) Real world impact statements of adoption by policy makers and governments.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Valuable starting point, but not yet sufficient contribution\", \"review\": \"This paper introduces OpenAI Gym environment for RL optimization of epidemic containment policies. The envirnoment currently contains an example SEIR model parameterized for COVID-19, along with a simple economic model to evaluate the lost productivity due to lockdowns. Some experimental results are shown where different deep learning algorithms are used to optimize intermittent lockdown policies.\\n\\nOn the positive side, connecting the epidemiological and ML communities is definitely an important goal. Developing open-source tools to make this interaction easier is valuable. \\n\\nI'm not sure that this is an appropriate paper for ICLR though. It mostly takes a preexisting epidemiological model and exposes it in the OpenAI gym interface, without a strong research contribution. I believe that there are technical issues in the development of a platform for health policy optimization which are likely to result in research contributions, for example dealing with uncertainty in models/parameters, developing more efficient methods for multiobjective optimization, or providing explanations of policies (particularly since experts are unlikely to implement a RL policy verbatim, but rather try to synthesize its recommendations with other considerations/sources of information). I hope that the authors continue to refine this platform and tackle these or other issues in the future.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Even if the work is interesting from a societal point of view, it is not enough to justify it as an ML study.\", \"review\": \"The authors provide a python tool able to model epidemics development as optimization problems. This allows easing the work of decision-makers when faced with the problem of deciding new lockdowns. The model has been applied to real-world data to evaluate the consequences, in terms of deaths and per-capita loss, of a new lockdown.\\n\\nThe paper presents an interesting study on the dynamics of the epidemic. In my opinion, the development of a tool is relevant for medical and decision-making studies but is not enough novel or significant for the ML field. I think that this kind of analysis is better suited for a more applicative venue, while the novelty provided in your work is not enough to justify a publication at ICLR.\\n\\nThe paper is clear and well written. I appreciate the use of a real case-study and the following analysis. I think that the only problem is that the venue chosen by the authors does not fit its purpose.\", \"questions\": [\"Did you also try your model on different datasets? I think that a more wide experimental campaing might improve the value of what have been proposed here\", \"What about other forms of prevention other than lockdowns? For instance, it is possible to model tracing or testing as prevention methods in your modeling? This would instead make the model more flexible and allow decisional organs to act in a more flexible way.\", \"Another interesting study might include the use of different methods for contagion prevention and their use in a joint or sequential manner in order to understand what are the policies over time which might be the most promising for a tradeoff healt/GDP.\", \"----------------------------------------------------------------\"], \"after_rebuttal\": \"the paper seems interesting but, as I already mentioned and as other reviewers pointed out, the main concerns about this paper are novelty and relevance to the ML community.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
V8jrrnwGbuc | On the geometry of generalization and memorization in deep neural networks | [
"Cory Stephenson",
"suchismita padhy",
"Abhinav Ganesh",
"Yue Hui",
"Hanlin Tang",
"SueYeon Chung"
] | Understanding how large neural networks avoid memorizing training data is key to explaining their high generalization performance. To examine the structure of when and where memorization occurs in a deep network, we use a recently developed replica-based mean field theoretic geometric analysis method. We find that all layers preferentially learn from examples which share features, and link this behavior to generalization performance. Memorization predominantly occurs in the deeper layers, due to decreasing object manifolds’ radius and dimension, whereas early layers are minimally affected. This predicts that generalization can be restored by reverting the final few layer weights to earlier epochs before significant memorization occurred, which is confirmed by the experiments. Additionally, by studying generalization under different model sizes, we reveal the connection between the double descent phenomenon and the underlying model geometry. Finally, theoretical analysis shows that networks avoid memorization early in training because, close to initialization, the gradient contribution from permuted examples is small. These findings provide quantitative evidence for the structure of memorization across layers of a deep neural network, the drivers for such structure, and its connection to manifold geometric properties.
| [
"deep learning theory",
"representation learning",
"statistical physics methods",
"double descent"
] | Accept (Poster) | https://openreview.net/pdf?id=V8jrrnwGbuc | https://openreview.net/forum?id=V8jrrnwGbuc | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"78hZ_nc4EqU",
"pC06qjQc5f",
"kN0N5Ep_NcS",
"rimCQW7yYki",
"z6H6g2au5v3",
"7FyJniTl_8F",
"DM5mETmhMJ",
"tNw-vUDSOPh",
"V151H2sR6O2",
"djl0JLE9qT",
"hXWN4UqX-yh"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040458987,
1606143426063,
1605923645590,
1605922805296,
1605922705802,
1605922662943,
1605922553074,
1604115100762,
1603921826284,
1603894916526,
1603744402075
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3668/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3668/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3668/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3668/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3668/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3668/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3668/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3668/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3668/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3668/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper offers novel insights about memorization, the process by which deep neural networks are able to learn examples with incorrect labels. The core insight is that late layers are responsible for memorization. The paper presents a thorough examination of this claim from different angles. The experiments involving rewinding late layers are especially innovative.\\n\\nThe reviewers found the insights valuable and voted unanimously for accepting the paper. The sentiment is well summarized by R2: \\\"The findings of the paper are interesting. It shows the heterogeneity in layers and training stage of the neural net\\\".\\n\\nI would like to bring to your attention the Coherent Gradients paper (see also R1 comment). This and other related papers already discusses the effect of label permutation on the gradient norm. Please make sure you discuss this related work. As a minor comment, please improve the resolution of all figures in the paper. \\n\\nIn summary, it is my pleasure to recommend the acceptance of the paper. Thank you for submitting your work to ICLR, and please make sure you address all remarks of the reviewers in the camera-ready version.\"}",
"{\"title\": \"Precisions\", \"comment\": \"Thank you for your response.\\n\\nIt's been suggested to me that a discussion of Coherent Gradient (https://openreview.net/forum?id=ryeFY0EFwS) may be relevant in this paper, due to the similarity of the intuitions that are gained from this paper and yours (although I'd argue _how_ those results are obtained is quite different).\\n\\nFigure 2B makes sense now, but, if I may suggest another improvement: to choose points to plot, you could choose the _intersection_ of points with permuted labels 1-10 and points with restored labels 1-10 (if that intersection is big enough) This should make the umap positions identical, avoid visual clutter, and still be a random selection due to the random process of permuting labels.\"}",
"{\"title\": \"Overall response\", \"comment\": \"We thank all reviewers for their insightful and constructive comments. We appreciate that each reviewer found our approach and findings to be a useful step towards a better understanding of generalization and memorization in DNNs. In the posted revision, we have started incorporating some of the changes suggested by the reviewers. We have expanded the explanation (page 4) and illustrations (Fig 1B) of the replica-theory-based MFTMA method in the main paper. We have also begun incorporating some of the suggested changes on Figures 1, 2, 3, and 5. We will continue to work on the presentation of our figures to improve readability and visual clarity. We have responded to each reviewer's individual questions in our reviewer-specific responses.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their helpful suggestions and are glad that they found our experiments well-organized and findings about the heterogeneity of layers and training stages interesting.\\n\\nHere we respond to some of the reviewer\\u2019s questions/comments:\\n\\n-**\\\"It is better to further explain the intuition of the Manifold Geometry Metrics. The current Figure 1(B) is not very clear.\\\"**\\n\\nWe agree that the intuition behind the Manifold Geometry Metrics should be clarified, including enhancements to the readability and explanation of Fig. 1(B). We have begun to expand these explanations using some of the additional space provided (see the updated document) and will continue to hone this section for clarity.\\n\\n -**\\\"In Manifold Capacity, what do P and N exactly mean? Is this P the number of classes as used elsewhere?\\\"**\\n\\nHere and elsewhere we use P as the number of classes, N as the number of features, and M as the number of examples per class.\\n\\n-**\\\"The paper explains that training on permuted examples, the network can learn generalizable representations at the initial training stage because the gradient ignores permuted examples. But why in the later training stage, the early layers and later layers show different generalization properties?\\\"**\\n\\nThis is a very interesting question and we do not yet have a good theoretical explanation for this effect. We are so far only able to explain the behavior in the early epochs of training (where permuted examples are largely ignored) but in later epochs we can no longer use the same methods as our linearization about initialization is no longer valid. Additionally, our empirical results on measuring the gradients on unpermuted versus permuted examples suggests that permuted labels contribute strongly to the training in all layers in the later epochs, perhaps owing to the increased nonlinearity of the network after many epochs of training. We speculate that whatever the mechanism that results in the differentiation between early and late layers in the late epochs of training is, it is probably qualitatively different from those that govern the behavior early epochs. We certainly hope that this question can be explored and answered in future works.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We are pleased that the reviewer found our observations interesting, and thank the reviewer for their suggestions for improving the presentation of the MFTMA method.\\n\\nIn response to some of the reviewer\\u2019s comments:\\n\\n-**\\\"I found it hard to understand MFTMA without referring to the appendix A. It would be nice to expand the explanation of MFTMA in the main paper. In addition, it would be good to further explain Fig 1. B which contains a lot of information.\\\"**\\n\\nWe agree with the reviewers that the explanation of the MFTMA method in the main text should be expanded. In the posted revision, we used some of the additional page to further explain this technique and the intuition behind it. We also expanded the size of Fig. 1.B for better readability, and in the next iteration, we plan to expand the explanation about Figure 1B to give intuition behind the manifold radius, dimension, and capacity quantities and how they are computed.\\n\\n-**\\\"Does the observation scale to larger dataset such as ImageNet?\\\"**\\n\\nTo demonstrate this, we also include results on the Tiny ImageNet which is larger than CIFAR100 both in the amount of data and in the size of the images. We also note that the behaviors we see when training without label noise on these datasets are similar to the trends observed when training on ImageNet without label noise, which can be seen in https://www.nature.com/articles/s41467-020-14578-5.\\n \\n-**\\\"Experiments are run for only one seed.\\\"**\\n\\nWe ran all our experiments with 10 random seeds, which varied the network initialization, the selection and labeling of the permuted examples, and the initialization of the MFTMA calculations. In most cases we found that our results were not very sensitive to this seed, hence error bars on our plots are fairly small and easy to miss. We will clarify this bit of methodology in the main text.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We are glad to hear that the reviewer found the claim regarding memorization in the later layers convincing, and also found it plausible that this is not due to vanishing gradients. We also wish to thank the reviewer for giving our manuscript such careful attention and finding a mislabeled figure in the supplemental information which we have corrected.\\n\\nBelow, we respond to some of the questions the reviewer had:\\n\\n-**\\\"Why do the authors consider only convolutional layers, not fully-connected layers, for the analyses? In the experiment of rewinding individual layers, the three FC layers are left untouched. Why?\\\"**\\n\\nAs is typical for image classification networks, most of the architectures we analyzed (AlexNet, and both of the ResNets) are constructed from purely convolutional, normalization, and pooling layers with a single FC layer at the output, while only VGG16 has a series of 3 FC layers before the output. For consistency across all networks we show results after convolutional layers or pooling layers, both of which can change the capacity while normalization layers can\\u2019t.\\n\\nFor the experiment of rewinding layers, we show the convolutional layers we analyzed with MFTMA for easier comparison with the MFTMA results. As the reviewer notes, rewinding the FC layers in VGG16 is a good suggestion since we see very little change in these layers across training, perhaps corresponding to the small gradients we see for these layers in Fig. 5. The rewinding experiments are running, and we will update the Si when ready. \\n \\n-**\\\"Is MFTMA the only method that can examine/verify the above finding?\\\"**\\n\\nTo our knowledge, MFTMA is currently the only method that can capture the relevant manifold geometry for classification. The creation of other/better methods of measuring the representational geometry is an important, though challenging, direction for further work in our view.\\n\\n-**\\\"Comments. At the first reading, I didn't understand what \\\"restored examples\\\" means, and it took me a while to understand it. The caption for Fig. A.7 has an error; CIFAR100 should be Tiny ImageNet.\\\"** Thank you for catching the error! We will correct them in the updated manuscript. We will also add clarifying explanations for 'restored examples' in the updated version.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for their thorough comments and suggestions to improve our manuscript, particularly for their detailed suggestions to improve the quality and readability of our figures. We also appreciate that the reviewer found several of the results interesting, and agrees that empirical study of these phenomena is valuable.\\n\\nIn response to some of the reviewer\\u2019s concerns:\\n\\n-**\\\"The setting explored here is somewhat artificial, (1) the requirement on a high enough epsilon (random label proportion) may not represent real use of DNNs (I write this having seen Fig A.8; this is also a common criticism of double-descent results) (2) the models trained here don't seem to exceed 40% testing accuracy, again not necessarily representing real use of DNNs (this is a bit surprising considering even models from back in 2013 had above 60% accuracy on CIFAR100).\\\"**\\n\\nThe reviewer is correct that the addition of label permutation noise and some training hyperparameters in our setup are different from standard training practice. These differences were necessary as we aim to study networks in which the memorization is unambiguous. Our experiment requires a set of randomly labeled examples which must be memorized if they are to be learned, and as noted in https://arxiv.org/abs/1705.10694, this requires slight modifications to the training procedure in the form of a larger batch size and a smaller learning rate. Both of these changes result in a decrease in generalization performance, though we note that the 40% test accuracy figure the review mentioned is the result of training on a dataset with 50% of it\\u2019s labels randomized. With smaller amounts of label noise we see much higher test accuracies, and we will include a plot of how test accuracy depends on the amount of label noise in the supplemental.\\n \\n-**\\\"Although the results of the paper do not hinge entirely on it, the reliance on MFTMA limits the interpretation somewhat: while an interesting tool, it's not clear to me that it allows us to make strong statements about the geometry of neural networks. In particular for the early layers, MFTMA may not be able to capture the geometry of features which might still be somewhat entangled yet possess a lot of richness.\\\"**\\n\\nIt is true that the MFTMA technique is more complex than other techniques for the analysis of representations (Linear probes, or comparative measures such as RSA, CCA, etc.) we believe it is the best tool for this purpose for two reasons:\\nIt is currently the only theory grounded technique that links geometric properties of the data (manifold radius, dimension) to a quantity relevant to classification performance (manifold capacity)\\nSince the results obtained via MFTMA depends on the geometry of the manifold, it doesn\\u2019t require a second train/test split unlike linear probes, and converges quickly with the number of samples.\\n\\nWe note that other measures such as intrinsic dimensionality [https://arxiv.org/pdf/1905.12784.pdf] or geodesics [https://arxiv.org/pdf/1511.06394.pdf] have been used to probe early layers' entanglement in the past, and we will add a discussion around this point in the manuscript in the next version. Nonlinear readout performance (such as quadratic readout, or with readouts with hidden layers) may also be an interesting measure of probing nonlinearly decodable information, in the highly entangled data regime. 
They are beyond the scope of the present work but certainly a very promising future direction.\\n\\n-**\\\"Something seems wrong with Figure 2B-middle two columns. \\u2026\\\"**\\n\\nWe appreciate the reviewer for inspecting this plot so closely. The plot is correct even though the points plotted for permuted and restored examples are different. The difference here is due to the downsampling of the number of classes. Here we show only the first 10 classes of CIFAR 100 to avoid visual clutter, and so the set of permuted examples with labels between 1-10 is different from the set of restored examples with labels between 1-10. If one were to create this plot with all 100 classes then, as the reviewer correctly points out, that umap plots for permuted and restored examples would be the same with only the colors differing. \\n\\n-**Many suggestions to improve the presentation of our figures**\\n\\nWe reiterate our appreciation for the well thought out list of suggestions for improving our figures. We have begun incorporating some of the suggested changes (see the revised manuscript, Figures 1, 2, 3, and 5) and will continue to work on the presentation of our figures to improve readability and visual clarity.\"}",
"{\"title\": \"Nice contribution in understanding generalization and memorization of deep neural networks\", \"review\": \"The paper empirically studies the reason for the phenomenon that deep neural networks can memorize the data labels, even the labels are randomly generated. New geometric measures by replica mean-field theory are applied in the analysis.\\n\\nThe findings of the paper are interesting. It shows the heterogeneity in layers and training stage of the neural net:\\n\\ni) Memorization occurs in deeper layers; rewinding the final layer to the early weights mitigates memorization.\\n\\nii) When memorization happens, the early layer still learn representations that can generalize.\\n\\niii) In the training, early activations stabilize first, and deeper layers weights stabilize first. \\n\\niv) Near initialization, the gradient is dominated by unpermuted examples.\\n\\nI have the following questions/comments:\\n\\n- It is better to further explain the intuition of the Manifold Geometry Metrics. The current Figure 1(B) is not very clear.\\n\\n- In Manifold Capacity, what do P and N exactly mean? Is this P the number of classes as used elsewhere?\\n\\n- The paper explains that by training on permuted examples, the network can learn generalizable representations at the initial training stage because the gradient ignores permuted examples. But why in the later training stage, the early layers and later layers show different generalization properties?\\n\\nIn general, this paper carries well-organized experiments. One shortcoming is that the paper does not provide a methodology to solve the generalization problem or further theoretical analysis of the observations. But the empirical discoveries are novel and can be beneficial to the deep learning community. \\n\\n###########\", \"updates\": \"Thanks for the authors' response. The modified version improves clarity. I think this paper provides nice observations and initial analysis to the community and can be beneficial to future work, so I recommend this paper to be accepted.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"New results providing an insight to understanding of generalization and memorization by DNNs\", \"review\": \"The authors apply MFTMA to DNNs trained on CIFAR with label noise to analyze their behaviors between generalization and memorization. Based on experimental results, they claim that what is involved in memorization are not lower layers but higher layers. This claim is convincing. Another claim that this is not caused by a vanishing gradient effect is plausible, too. I'm sure these results give some insights into understanding generalization and memorization by DNNs.\\n\\nQuestions. \\nWhy do the authors consider only convolutional layers, not fully-connected layers, for the analyses? In the experiment of rewinding individual layers, the three FC layers are left untouched. Why?\\n\\nIs MFTMA the only method that can examine/verify the above finding?\\n\\nComments. \\nAt the first reading, I didn't understand what \\\"restored examples\\\" means, and it took me a while to understand it. The caption for Fig. A.7 has an error; CIFAR100 should be Tiny ImageNet.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Make interesting observations\", \"review\": \"###\", \"summary\": \"This paper investigates memorization in deep neural networks (DNNs). Authors leverage mean field theoretic geometric analysis method (MFTMA) to analyze when and where memorization occurs in a DNN. Through empirical analysis, they show that i) generalizing feature are learned initially and that memorization happen later in training mostly in the top layers ii) we can mitigate memorization by rewinding the top layers parameters to earlier values. They also show that their MFTMA metrics can highlight the phenomena of double decent. Finally, they demonstrate that gradient descent initially ignores noisy example and focus on correctly labeled examples.\\n\\n###\", \"reasons_for_score\": \"I lean toward acceptance. This paper makes interesting observation regarding memorization of deep network, it performs a good empirical study which provide enough evidences for the different claims. Although, MFTMA could be a better explained in the main paper. \\n \\n###\", \"pros\": [\"As stated above, the paper makes interesting observation regarding memorization of deep network.\", \"It performs a thorough empirical study.\", \"###\"], \"cons\": [\"I found it hard to understand MFTMA without referring to the appendix A. It would be nice to expand the explanation of MFTMA in the main paper. In addition, it would be good to further explain Fig 1. B which contains a lot of information.\", \"Does the observation scale to larger dataset such as ImageNet ?\", \"Experiments are run for only one seed.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper analyses memorization in DNNs, from the lens of memorization = fitting random labels, and finds that it seems to happen in later layers. These results are obtained using the MFTMA framework, a manifold analysis tool, testing geometric properties of individual layers. The analysis also attempts to explain why such a phenomenon exists, and makes a few interesting observations.\\nThis paper does not propose any new algorithm, but instead settles some important questions by infirming or affirming past speculation on layer behaviour found in the literature.\", \"i_find_three_particularly_interesting_results_in_this_paper\": [\"later layers seem to be responsible for memorization, while early layers seem to converge last but consistently learn \\\"generalizing\\\" features (although this may not be true for other architectures)\", \"increasing the dimensionality of the network to induce double descent _decreases_ the manifold dimensionality of the last layer. This is consistent with overparameterization making everything smoother/flatter and more easily disentangleable in the last layer.\", \"for examples with the wrong class, gradients initially vanish (due to destructive interference), which seems to be a driving force for the initial good generalization performance.\"], \"downsides_of_the_paper\": [\"The setting explored here is somewhat artificial, (1) the requirement on a high enough epsilon (random label proportion) may not represent real use of DNNs (I write this having seen Fig A.8; this is also a common criticism of double-descent results) (2) the models trained here don't seem to exceed 40% testing accuracy, again not necessarily representing real use of DNNs (this is a bit surprising considering even models from back in 2013 had above 60% accuracy on CIFAR100).\", \"Although the results of the paper do not hinge entirely on it, the reliance on MFTMA limits the interpretation somewhat: while an interesting tool, it's not clear to me that it allows us to make strong statements about the geometry of neural networks. In particular for the early layers, MFTMA may not be able to capture the geometry of features which might still be somewhat entangled yet possess a lot of richness.\", \"I have some issues with the presentation of the paper\", \"This paper does not really introduce a novel lens on generalization or significantly new ideas (although I'd argue it formalizes existing ideas and properly tests them empirically).\"], \"on_the_value_of_the_contribution\": [\"I think having empirical evidence of the studied phenomena is valuable, more so than previous speculation on them.\", \"The empirical results presented here do open the door for new questions to be answered and may help focus the ongoing investigation of memorization and generalization in DNNs\"], \"additional_comments\": [\"Something seems wrong with Figure 2B-middle two columns. Aren't permuted and restored examples the same inputs X but with the corresponding Y changed? If this is the case, then their UMAP should be the same, the only difference between the second column and the third column should be the coloring of the points. I presume that the figure shows a different minibatch of Xs for these two columns; I would highly recommend not doing so and using the exact same inputs. It would be consistent with the text, and the presentation, e.g. Fig 1A.\", \"All Figures: the label fonts should be bigger. 
From the ICLR formatting guidelines: \\\"use 10 point type [for text]\\\", and \\\"all artwork must be neat, clean, and legible.\\\" Having to zoom in and out to be able to read figures properly hurts accessibility and legibility, which detracts from the quality of the paper. Packing text, results, and figures in an 8-page document can be hard, but synthesizing information, including visual information contained in figures, is an essential skill in conveying knowledge.\", \"-- Here are a few suggestions for this particular paper: Figure 1A seems unnecessary, the text conveys these 3 concepts clearly; Figure 1B is important and should take the entire width of the page, with legible fonts; Figure 2A's subplots all share the same X and Y axis, making their naming redundant and taking up space; Figure 2B's column labels are also repeated needlessly, taking up vertical space; Figure 3's X axis doesn't need individual layer name labels, and could be replaced with a single \\\"Layer depth\\\" label -- 3A and 3B also share this axis, leading to wasted vertical space (space that could be used to make fonts larger); idem for Figure 4A, individual layers do not need to be named, but rather the concept of layer depth can be conveyed with a properly labelled colorbar gradient -- 4CDE could be less wide and leave more horizontal space to make fonts larger.\", \"In Figure 5A, it's not immediately clear that the X axis are individual layers, the log(nabla) label should be on the colorbar rather than on top of the figure. I'd also suggest flipping the X and Y axis, as the X axis is typically used for time; this would allow there to have the three subplots side by side with a shared labelled colorbar on the right (matplotlib seems to be used here, see matplotlib.pyplot.subplots's sharex/sharey arguments for examples).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
1UtnrqVUeNE | Detecting Misclassification Errors in Neural Networks with a Gaussian Process Model | [
"Xin Qiu",
"Risto Miikkulainen"
] | As neural network classifiers are deployed in real-world applications, it is crucial that their predictions are not just accurate, but trustworthy as well. One practical solution is to assign confidence scores to each prediction, then filter out low-confidence predictions. However, existing confidence metrics are not yet sufficiently reliable for this role. This paper presents a new framework that produces more reliable confidence scores for detecting misclassification errors. This framework, RED, calibrates the classifier's inherent confidence indicators and estimates the uncertainty of the calibrated confidence scores using Gaussian Processes. Empirical comparisons with other confidence estimation methods on 125 UCI datasets demonstrate that this approach is effective. An experiment on a vision task with a large deep learning architecture further confirms that the method can scale up, and a case study involving out-of-distribution and adversarial samples shows the potential of the proposed method to improve robustness of neural network classifiers more broadly in the future. | [
"Neural Network Classifier",
"Error Detection",
"AI safety"
] | Reject | https://openreview.net/pdf?id=1UtnrqVUeNE | https://openreview.net/forum?id=1UtnrqVUeNE | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"thJlf7DHlih",
"rDimkNS_DhS",
"Plj1LfCS_qn",
"y-H72FHwjZf",
"ge5FIxcHux",
"Gindb7IMd5m",
"H41tH4KtC3",
"Dot3eBPQlA9",
"8hzgjcJW-Lc",
"zH59rd_2BGQ",
"WBMKrZ7hKvz",
"yaY5db__tMY",
"8y86mQBwfLy"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610707959648,
1610040367452,
1606245956220,
1606245837397,
1606245690215,
1606245405591,
1606245101878,
1606245035514,
1606244675091,
1604105077680,
1603953956121,
1603841226951,
1603029206523
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3666/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3666/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3666/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3666/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Reply to program chair\", \"comment\": \"We would like to thank the program chair for spending time reading our manuscript and providing suggestions. We also want to clarify some points raised by the program chair, in order to avoid possible confusions:\\n\\n1. Regarding the program chair\\u2019s comment \\u201cIn the tables, it can be hinted that this might be happening, as about 80% of the cases MCP and RED are indistinguishable in the AUROC values.\\u201d Actually we have already run two statistical tests on the AUROC values, and the results are already summarized in Table 2 in the original manuscript. RED is statistically significantly better than MCP in ~50% of the datasets, not 20% only. Moreover, AUROC is sometimes less informative [1] and not ideal when the positive class and negative class have greatly differing base rates [2] (this happens when the base classifier has high prediction accuracy so we only have few misclassified examples). Since the focus of this work is on error detection, we stated in the paper that we would focus more on AP-error and AUPR-error. In terms of AP-error and AUPR-error, RED indeed significantly outperforms MCP in most datasets.\\n\\n2. Actually we have already added two new types of base classifiers during the rebuttal (please see section 4.2), and the experimental results validate the robustness of our approach (summarized results in Table 3). A deep NN model with a large-scale dataset was also included in section 4.3 in the original manuscript.\\n\\n3. Regarding program chair\\u2019s recommendation to compare with methods in paper: https://arxiv.org/abs/1706.04599, actually we have already cited this paper in section 2, and made extensive discussions about the difference between their work and ours: These methods focus on reducing the difference between reported class probability and true accuracy, and generally the rankings of samples are preserved after calibration. As a result, the separability between correct and incorrect predictions is not improved. In contrast, RED aims at deriving a score that can differentiate incorrect predictions from correct ones better. RED is solving a totally different problem. Directly applying the methods in https://arxiv.org/abs/1706.04599 to error detection is the same as using MCP baseline, which we already included in the current manuscript.\\n\\n4. Regarding the program chair\\u2019s comment \\u201cusing a GP might be an overkill\\u201d, we are using a sparse GP variant called SVGP, which has much lower computational complexity compared to original GP. Moreover, as discussed in the original manuscript, the main reason for using a probabilistic model like GP is to further provide uncertainty information regarding the returned \\u201cconfidence score\\u201d. This can overcome the limitations of original NN classifiers: the standard NNs do not provide any information regarding uncertainty of their inherent confidence scores, resulting in misleading \\u201coverconfident\\u201d predictions. This is critical to the safety and reliability of real-world AI models.\\n\\n[1] Chris Manning and Hinrich Sch\\u00fctze. \\u201cFoundations of Statistical Natural Language Processing\\u201d. MIT Press, 1999.\\n\\n[2] Dan Hendrycks and Kevin Gimpel. \\u201cA baseline for detecting misclassified and out-of-distribution examples in neural networks\\u201d. Proceedings of International Conference on Learning Representations, 2017.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"In this paper, the authors use a GP classifier to detect if the output of a NN classifier has been decided correctly. The GP takes as input the original input vector x and the output of the NN, i.e. the calibrated posterior probabilities given by the NN. It uses that as an input vector for the GP classifier to decide if the sample was correctly decided. The output of the GP will serve as confidence in the output of the NN. The results are comparable/superior with the state-of-the-art and the authors have repeated the experiments with over 125 different datasets. The reviewers of this paper were all cautiously positive about the paper, but all of them pointed towards the reduced novelty of the paper. Also, none of the reviewers were willing to champion this paper as a must-have at ICLR 2021.\\n \\nFor my reading of the paper, I would tend to agree with the reviewers\\u2019 comments. Also, I find that using the same NN, rather shallow, with the same configuration for all the datasets seems rather limited. Given that this method is independent of the underlying classifier and that the databases used are low dimensions and a low number of training examples, I would have liked to see what a random forest or a GP can accomplish. Also, I would have used bigger NNs that can be trained to overfit the sigmoid outputs for classification of higher accuracy. I believe that having a diversity of underlying classifiers is more relevant than having 125 datasets. We need to find the best classifier or ensemble and then apply the different mechanisms for estimating if the output is the correct one. Otherwise, the proposed method might only be workable for this specific NN configuration. In the tables, it can be hinted that this might be happening, as about 80% of the cases MCP and RED are indistinguishable in the AUROC values.\\n \\nAlso, for all of these datasets a GP could be used as an underlying classifier, and given the premises of this paper, the authors could check how well calibrate a GP classifier is. Also, there has been considerable work on calibrating NNs when they are trained to overfit. Comparing with those methods should be straightforward, as they provide more information than just a confidence score. This is probably the most influential paper: https://arxiv.org/abs/1706.04599 (1000+ references), but there are some recent papers too. \\n \\nFinally, if the goal is to use a GP to detect if the classification done by the NNs is accurate, using a GP might be an overkill, as the complexity of the GP, especially for large datasets might end up being larger than the underlying classifier.\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": [\"We want to thank all the reviewers for their time and effort in evaluating our work. Thank you for your constructive comments. We have carefully read and considered all your suggestions. A common concern by all reviewers is the need for including more baseline approaches. As suggested, we have added five more baseline approaches, and tested all of them on all 125 UCI datasets and CIFAR-10 (when applicable). RED significantly outperforms all these approaches, significantly strengthening the conclusions of the paper. Below is a list briefly summarizing the main revisions:\", \"new experiments\", \"comparison with DNGO [1]\", \"comparison with original SVGP [2]\", \"comparison with entropy of softmax outputs [3]\", \"comparison with MC-dropout [4]\", \"comparison with Bayesian Neural Networks [5]\", \"more comprehensive evaluation on CIFAR-10 datasets with VGG16 models\", \"new discussions\", \"clarifying the purpose and contribution of the approach in section 1\", \"clarifying the intention of the preliminary case study in section 4.4 (previously section 4.3)\", \"suggestion on how to choose the warning threshold in section 5\", \"We have also responded to the specific comments of each reviewer by directly replying to their original reviews.\", \"***\", \"[1] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. 2015. \\u201cScalable Bayesian optimization using deep neural networks\\u201d. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15)\", \"[2] James Hensman, Nicol`o Fusi, and Neil D. Lawrence. \\u201cGaussian processes for big data\\u201d. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI\\u201913\", \"[3] Jacob Steinhardt and Percy Liang. \\u201cUnsupervised risk estimation using only conditional independence structure\\u201d. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS\\u201916\", \"[4] Yarin Gal and Zoubin Ghahramani. \\u201cDropout as a bayesian approximation: Representing model uncertainty in deep learning\\u201d. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML\\u201916\", \"[5] Yeming Wen, Paul Vicol, Jimmy Ba, Dustin Tran, and Roger Grosse. \\u201cFlipout: Efficient pseudo independent weight perturbations on mini-batches\\u201d. In International Conference on Learning Representations, 2018.\"]}",
"{\"title\": \"Responses to Reviewer 3 (2 out of 2)\", \"comment\": \"Comment 4: \\u201cIn the second paragraph of section 4.2., there seems to be a typo when reporting the margin. It is said 0.42 and 0.55 for ConfidNet and RED respectively, but I think it should be 0.042 and 0.055 by looking at Table 3.\\u201d\", \"a4\": \"Thanks for pointing out this typo, it should be 0.042 and 0.055. We have updated the results and descriptions.\\n***\", \"comment_5\": \"\\u201cIt is not entirely clear to me why the process described in section 4.3. (second paragraph) produces proper OOD and adversarial data. For instance, some of the intended OOD data could be similar to training data (specially because the latter is being normalized to mean 0 and std 1). And similarly for the adversarial case. I think this could be better explained.\\u201d\", \"a5\": \"While the main focus of this work is on misclassification detection, This case study is included to highlight a significant and promising avenue for future work. It is a preliminary test of RED on detecting OOD and adversarial data under the most difficult scenaria, i.e., when the OOD data is similar to the original data, and the adversarial samples have almost identical input features with the original data but generate different predictions. The results are promising, suggesting that RED provides a new foundation to this difficult problem, hopefully inspiring further work in this area. We have added a more thorough explanation to section 4.4 (previously section 4.3) in the revised version to clarify this point.\\n***\", \"comment_6\": \"\\u201cWhen it comes to real practice, a key decision is to set a threshold on the confidence score to decide what instances should be supervised by an expert. Is there any recommendation on this?\\u201d\", \"a6\": \"Yes, our suggestion is to use a validation dataset to check how the tradeoffs between precision and recall (in terms of misclassification detection) change over different thresholds. The users can then decide the threshold based on their preference on the precision-recall tradeoff. There is also some literature discussing the threshold choice [4][5][6] (references to it are included in \\u201cRelated Work\\u201d section). This point is discussed more extensively in \\u201cDiscussion and Future Work\\u201d section in the revised version.\\n***\\n[1] Yarin Gal and Zoubin Ghahramani. \\u201cDropout as a bayesian approximation: Representing model uncertainty in deep learning\\u201d. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML\\u201916\\n[2] James Hensman, Nicol`o Fusi, and Neil D. Lawrence. \\u201cGaussian processes for big data\\u201d. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI\\u201913\\n[3] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. 2015. \\u201cScalable Bayesian optimization using deep neural networks\\u201d. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15)\\n[4] Bernard Dubuisson and Mylne Masson. A statistical decision rule with incomplete knowledge about classes. Pattern Recognition, 26(1):155 \\u2013 165, 1993.\\n[5] Carla M. Santos-Pereira and Ana M. Pires. On optimal reject rules and roc curves. Pattern Recogn. Lett., 26(7):943952, May 2005.\\n[6] C. Chow. 
On optimum recognition error and reject tradeoff. IEEE Trans. Inf. Theor., 16(1):4146, September 2006.\"}",
"{\"title\": \"Responses to Reviewer 3 (1 out of 2)\", \"comment\": \"Thanks for your encouraging comments and constructive suggestions. We have added more baselines as suggested to make the experimental evaluations more comprehensive. Please see our detailed responses below:\\n***\", \"comment_1\": \"\\u201cMy main concern is that the contribution in RED can be regarded somehow incremental given the RIO approach. It utilizes the same rationale behind RIO, and just adapts the necessary components so that it works in classification. The adaptation of these components is also straightforward: the output kernel now works on several dimensions (instead of the scalar dimension of regression) and the target is now the correctness of the original prediction.\\u201d\", \"a1\": \"It is notable that the original RIO is only limited to standard regression problems, but RED extends it to solve an important yet underexplored direction in classification domain: detecting misclassification errors. The main contribution of RED is to capture the connection between these two seemingly unrelated topics (a regression method and error detection in classification). We believe the implementation is natural and compelling, and the experimental results (in the revised version RED is compared to 9 approaches in 100+ datasets) show that the proposed approach indeed works significantly better than existing approaches in this new problem.\\n***\", \"comment_2\": \"\\u201cThe experimental validation focuses on several competitors which can be considered \\\"of the same family\\\" as the proposed approach. Namely, all of them calibrate the predictions of a pre-trained neural network. I think it would be interesting to also compare to a different \\\"family\\\" of methods. For instance, (Functional) Bayesian Neural Networks are meant to obtain calibrated predictions by leveraging epistemic uncertainty (that coming from the model parameters).\\u201d\\n\\nA2. We chose these competitors because they are state-of-the-art in the same problem as RED is intended to solve: providing a quantitative metric to detect misclassification errors of a pre-trained neural network. We agree, however, that including more traditional approaches makes the evaluations more convincing. Therefore, as suggested, we added a comparison to Bayesian Neural Networks (BNN). More specifically, we trained a BNN and applied RED on top of it to see whether RED is able to provide better confidence scores in misclassification detection compared to the internal confidence scores returned by BNN. The comparisons were run on all 125 UCI datasets, and the results show that RED outperforms BNN significantly (see Table 3 in the revised manuscript). In addition, four more approaches were included for comparison, as suggested by other reviewers: entropy of the softmax outputs, MC-dropout [1], original SVGP [2], and DNGO[3]. RED significantly outperforms all of them (see Table 1, 2, 3 and 4 in the revised manuscript), thus strengthening the conclusions. Thanks for the suggestion!\\n***\", \"comment_3\": \"\\u201cI do not fully understand the relevance of the experiment with the large deep learning architecture given by the VGG16 model. Since the proposed method works on the pre-trained neural network, my understanding is that the complexity of the neural network itself is not relevant for the performance of the proposed approach. 
Also, in this experiment I miss several independent runs to assess the results variability.\\u201d\", \"a3\": \"Indeed this experiment is not strictly necessary given the results on the 125 UCI datasets. The reason it was included was to verify empirically that RED also works well on more complex vision tasks with large, deep architectures. Since the submission we have improved the training pipeline for the VGG16 model (it now achieves state-of-the-art accuracy), and ran 10 independent runs to verify that the results are reliable. Statistical tests were run against the original and newly added baselines (MC-dropout and BNN is independent of VGG16 model, so they are not included in the CIFAR-10 experiment). The results verify that RED significantly outperforms all counterparts (see Table 4 in the revised manuscript). These new results are included in the revision, strengthening the conclusions.\"}",
"{\"title\": \"Responses to Reviewer 4\", \"comment\": \"Thanks for your positive comment. As suggested, we have included more comparisons to other approaches in the revised version. Please see detailed responses below:\\n***\", \"comment_1\": \"\\u201cIt would be good to apply SVGP directly to some of these datasets and compare the results against NN+SVGP results.\\u201d\", \"a1\": \"As suggested, we applied SVGP to all 125 UCI datasets and CIFAR-10. More specifically, SVGP was used to predict whether the original prediction is correct or not. Based on these new experimental results, RED (NN+SVGP together) performs significantly better than SVGP alone in the misclassification detection task (see Table 1, 2 and 4 in the revised manuscript). As suggested by other reviewers, four other comparisons were also included: entropy of the softmax outputs, Bayesian Neural networks, MC-dropout [1], and DNGO[2]. Experimental results on 125 UCI datasets and CIFAR-10 confirms that RED performs significantly better than all of these approaches (see Table 1, 2, 3 and 4 in the revised manuscript), significantly strengthening the conclusions.\\n***\", \"comment_2\": \"\\u201cYou use the term \\u201ccalibrated\\u201d confidence score/prediction. Could you explain what do you mean by calibrated?\\u201d\", \"a2\": \"\\u201cCalibrated\\u201dmeans that RED is applied on top of the internal confidence score returned by the original classifier, e.g., the maximum softmax output. RED thus estimates the residuals between the originally predicted confidence score and target confidence score (1 for correct prediction, 0 for incorrect prediction). After that, RED adds the estimated residual back to the original confidence score, and generates a new confidence score in order to detect misclassifications. This new confidence score returned by RED is the \\u201ccalibrated\\u201d version of the original confidence score. Note that this \\u201ccalibrated\\u201d confidence score is only used for misclassification detection. It does not affect the outputs or prediction accuracy of the original classifier. The introduction section has been revised to clarify this point.\\n***\", \"comment_3\": \"\\u201cI find the presentation of results very confusing. For example, in Table 1, AP-Error is smallest for the RED method and in Table 3 AP-error is the largest for the RED method. In both cases, it is mentioned that the RED method outperforms other methods.\\u201d\", \"a3\": \"The results in Table 1 present the mean rank of the algorithm in 125 UCI datasets, in terms of different metrics like AP-Error, so the smaller the better. The results in Table 3 instead show the absolute values of different metrics like AP-Error in CIFAR-10 dataset, and larger values are better. We have made this concern clear in the revised version.\\n***\", \"comment_4\": \"\\u201cYou mentioned ConfidNet outperformed the MCP baseline by a margin of 0.42. I do not see this number on the table.\\u201d\", \"a4\": \"We are sorry for the typo. It should be 0.042. We have updated the experimental results and descriptions accordingly.\\n***\", \"comment_5\": \"\\u201cIt would be good if the authors could mention in the paper what is RIO short for.\\u201d\", \"a5\": \"Thanks for the suggestion. We have added a note in the revised version to specify that RIO stands for Residual Input/Output.\\n***\", \"comment_6\": \"\\u201cYou mentioned that you need to extend the kernel to multiple output kernel. 
Could you explain a bit more about that and how you build it?\\u201d\", \"a6\": \"The output kernel in the original RIO model is limited to single-output regression problems. However, for classification problems, the original model usually has multiple outputs, each one corresponding to one class. In RED, this output kernel is extended to multiple outputs of the original classifier. Utilizing information from all outputs should be beneficial in misclassification detection compared to simply considering the single output of the predicted class. To build this kernel, the calculation of covariances (based on GP kernel) is extended from single dimension to multiple dimensions. The feature for output kernel is thus a vector containing multiple softmax outputs (one for each class). A description of this process is included in both the texts and Algorithm1 in section 3.3.\\n***\\n\\n[1] Yarin Gal and Zoubin Ghahramani. \\u201cDropout as a bayesian approximation: Representing model uncertainty in deep learning\\u201d. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML\\u201916\\n[2] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. 2015. \\u201cScalable Bayesian optimization using deep neural networks\\u201d. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15)\"}",
"{\"title\": \"Responses to Reviewer 2 (2 out of 2)\", \"comment\": \"references\\n***\\n[1] Dan Hendrycks and Kevin Gimpel. \\u201cA baseline for detecting misclassified and out-of-distribution examples in neural networks\\u201d. Proceedings of International Conference on Learning Representations, 2017.\\n\\n[2] Jonathan Aigrain and Marcin Detyniecki. \\u201cDetecting adversarial examples and other misclassifications in neural networks by introspection\\u201d. CoRR, abs/1905.09186, 2019.\\n\\n[3] Yarin Gal and Zoubin Ghahramani. \\u201cDropout as a bayesian approximation: Representing model uncertainty in deep learning\\u201d. In Proceedings of the 33rd International Conference on International Conference on Machine Learning - Volume 48, ICML\\u201916\\n\\n[4] James Hensman, Nicol`o Fusi, and Neil D. Lawrence. \\u201cGaussian processes for big data\\u201d. In Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence, UAI\\u201913\\n\\n[5] Jasper Snoek, Oren Rippel, Kevin Swersky, Ryan Kiros, Nadathur Satish, Narayanan Sundaram, Md. Mostofa Ali Patwary, Prabhat Prabhat, and Ryan P. Adams. 2015. \\u201cScalable Bayesian optimization using deep neural networks\\u201d. In Proceedings of the 32nd International Conference on International Conference on Machine Learning - Volume 37 (ICML'15)\"}",
"{\"title\": \"Responses to Reviewer 2 (1 out of 2)\", \"comment\": \"Thanks for your constructive suggestions. We have added experimental comparisons to several more approaches in the revised version. Please see detailed responses below:\\n***\", \"comment_1\": \"\\u201cHowever, I question whether the baselines are sufficient; it is not demonstrated whether RED would outperform other confidence scoring and OOD detection methods mentioned in the related work section, such as temperature scaling (or the related method ODIN, proposed in Liang, S., Li, Y., and Srikant, R., 2017. Enhancing the reliability of out-of-distribution image detection in neural networks.) or simply the entropy of the softmax predictions. Unless there is a good justification for the limited set of baselines, I believe the paper's claims to generality are limited.\\u201d\", \"a1\": \"It is important to clarify that the focus of this work is on misclassification detection (an underexplored [1] and challenging [2] new area), instead of OOD or adversarial detection. We have considered all the approaches mentioned in the related work section from this perspective, however, most of them do not apply to the misclassification detection problem. Taking temperature scaling as an example, the main idea is to scale all the logit outputs by a scalar T, and the same T is applied to all predictions. As a result, the relative ranking of the predictions are still preserved after re-scaling, so it makes no difference for misclassification detection compared to using original softmax outputs (the MCP baseline in our experiments). It is notable that approaches like temperature scaling focus on reducing the difference between reported class probability and true accuracy, and the separability between correct and incorrect predictions is not improved. In contrast, RED aims at deriving a score that can differentiate incorrect predictions from correct ones. This point is emphasized in the \\u201crelated work\\u201d section. Similarly, ODIN is particularly designed for OOD detection in image tasks, which is a different problem from misclassification detection, as is now clarified in the related work section.\\n\\nHowever, to put the results in context, we added the entropy of the softmax predictions as a baseline (it was not originally included because according to literature [1], its performance is similar to maximum class probability baseline, which was already included). Experiments on 125 UCI datasets and CIFAR-10 show that RED significantly outperforms the entropy baseline (see Table 1, 2 and 4 in the revised manuscript). In addition, four more baselines were included for comparison as suggested by other reviewers: Bayesian Neural Networks, MC-dropout [3], original SVGP [4], and DNGO[5]. According to experimental results (see Table 1, 2, 3 and 4 in the revised manuscript), RED performs significantly better than all of them, which, we believe convincingly demonstrates the value of the approach.\\n***\", \"comment_2\": \"\\u201cAdditionally, for the OOD detection results shown in Figure 3, why were AUROC and AUPRC not reported? 
While the scatterplots show separability of OOD data visually, these metrics (used elsewhere in the paper) would give a better indication of performance (and again, I think a greater range of baselines and tasks would be necessary to make any firm claims about OOD detection).\\u201d\", \"a2\": \"Because the focus of the paper is on misclassification detection, the case study in section 4.4 (previously section 4.3) is not yet intended to make substantial claims about the superiority of RED in detecting OOD examples. However, we decided to show the results in Figure 3 because this preliminary finding shows an intriguing possibility for future work: Since RED provides both the mean and variance of the confidence scores, it is possible to construct a 2-dimensional space for error detection (as shown in Figure 3). This space is different from the 1-dimensional detection space in traditional approaches, which only provide a single number for the confidence score. With this new 2D space, it is possible to not only detect the errors, but also differentiate between different types of errors, i.e. separate correct, incorrect (misclassification), OOD, and adversarial samples. Traditional metrics like AUROC and AUPRC are only designed for binary classification problems, but this classification problem has four classes in total. Therefore, AUROC and AUPRC cannot be directly applied, and new metrics will need to be developed for this new domain (differentiating different types of errors). This case study is intended to show the potential of further extending RED to broader error detection tasks, and we hope it can inspire other researchers in their future work. We have placed this case study at the end of the experiments section to avoid distraction from the main topic, and included discussion to clarify its purpose. We also included discussions in the future work section to point out this new direction.\"}",
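The invariance argument in A1 above is easy to verify directly. The following minimal sketch uses toy data; the `softmax` helper and array names are illustrative assumptions, not code from the paper. It shows that dividing all logits by a shared positive temperature T leaves both the predicted class and the within-sample ranking of classes unchanged, which is why temperature scaling cannot separate correct from incorrect predictions any better than the raw maximum class probability:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; a single scalar T is shared by all samples."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
logits = rng.normal(size=(5, 10))   # 5 toy samples, 10 classes
p_ref = softmax(logits, T=1.0)

for T in (0.5, 2.0, 10.0):
    p = softmax(logits, T)
    # The predicted class of every sample is unchanged for any T > 0 ...
    assert np.array_equal(p.argmax(axis=1), p_ref.argmax(axis=1))
    # ... and so is the full ranking of classes within each sample, because
    # dividing logits by a positive scalar is order-preserving.
    assert np.array_equal(np.argsort(p, axis=1), np.argsort(p_ref, axis=1))
```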
"{\"title\": \"Responses to Reviewer1\", \"comment\": \"Thank you for constructive suggestions. We have carefully considered your comments, and added the suggested experiments. Please see detailed responses below:\\n***\", \"comment_1\": \"\\u201cIn this paper, their goal is to improve calibration and accuracy by augmenting a classification model with a GP.\\u201d\", \"a1\": \"We would like to emphasize that the proposed method (RED) does not change the prediction accuracy of the original classification model. Instead, RED is a supporting tool that can provide a quantitative metric for detecting misclassification errors of the original classification model. We have revised the Introduction section to make this point clear.\\n***\", \"comment_2\": \"\\u201cThey propose a model, RED, which instead tries to predict the residual between the predicted confidence score for the true class and 1 \\u2014 the true class target confidence score using a GP.\\u201d\", \"a2\": \"Actually, \\u201cthe predicted confidence score for the true class\\u201d should be \\u201cthe predicted confidence score for the predicted class\\u201d; this score corresponds to the maximum class probability returned by the original classification model (the predicted class may not necessarily be the true class).\\n***\", \"comment_3\": \"\\u201cMy main concern is that I think some additional methods need to be compared with. For example [1] uses a bayesian last layer which is something that should be compared with. Using an ensemble of single layer NNs for the last layer or using MC-dropout at test time (which is known to approximate Bayesian inference under certain conditions) would also be interesting.\\u201d\", \"a3\": \"Thanks for the suggestion---we added several new comparisons in the revised paper, and they strengthen the conclusions significantly. First, a comparison with the approach in your reference [1] was included. Although the original approach [1] models the surrogate function in Bayesian Optimization setup, we managed to extend it to error detection problems by adding a Bayesian linear regression layer after the logits layer of the original classification model to predict whether an original prediction is correct or not. This approach was tested over all 125 UCI datasets and CIFAR-10 (with VGG-16 architecture). RED outperforms this approach by a significant margin (see Table 1, 2 and 4 in the revised manuscript). Second, a comparison with the MC-dropout approach [2] was included. More specifically, the original standard NN classifier (without dropout layer) was replaced with an NN classifier (adding dropout layers after each hidden layer) with dropout running in both train and test time. RED was then applied on top of the MC-dropout NN classifiers to see whether RED is able to provide better performance in error detection. Experiments were again run on all 125 UCI datasets (the modified MC-dropout NN classifiers does not directly apply to CIFAR-10), and again RED significantly outperformed MC-dropout (see Table 3 in the revised manuscript). Third, comparisons were added with Bayesian Neural Networks, original SVGP, and entropy baseline as suggested by other reviewers. Please check Table 1, 2, 3, 4 in the revised manuscript for the newly added results. 
Based on the experiments, RED performs significantly better than all of them, supporting the conclusions of the paper strongly.\\n***\", \"comment_4\": \"\\u201cI find the approach interesting though the novelty is incremental over the RIO paper.\\u201d\", \"a4\": \"Actually, the original RIO approach applies only to standard regression problems; it cannot be directly applied to classification problems. Moreover, detecting misclassification errors is a distinctly different problem from improving the accuracy/calibrating the predictions, on which most previous works have been done. This problem is still underexplored [3] and challenging [4], yet it is critical for improving AI safety in real-world applications. Our main innovation is to capture the connection between a method for regression (RIO) and a new problem in classification, and successfully extend the method to this new problem. The experimental results (now compared with 9 approaches in 100+ datasets) show that the RED indeed works significantly better than state-of-the-art approaches. We have revised the introduction to make this framing clear.\\n***\\n[1] \\u201cScalable Bayesian Optimization Using Deep Neural Networks\\u201d by Snoek et al. \\n[2] \\u201cDropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning\\u201d by Gal et al.\\n[3] Dan Hendrycks and Kevin Gimpel. \\u201cA baseline for detecting misclassified and out-of-distribution examples in neural networks\\u201d. Proceedings of International Conference on Learning Representations, 2017.\\n[4] Jonathan Aigrain and Marcin Detyniecki. \\u201cDetecting adversarial examples and other misclassifications in neural networks by introspection\\u201d. CoRR, abs/1905.09186, 2019.\"}",
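For context on the MC-dropout comparison described in A3, the sketch below shows the standard recipe from Gal & Ghahramani (2016): dropout stays active at test time and predictions are averaged over multiple stochastic forward passes. This is a generic PyTorch illustration with assumed toy dimensions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class MCDropoutMLP(nn.Module):
    """Toy classifier with a dropout layer after each hidden layer (illustrative only)."""
    def __init__(self, d_in=20, d_hidden=64, n_classes=3, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, d_hidden), nn.ReLU(), nn.Dropout(p),
            nn.Linear(d_hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=50):
    """Average softmax outputs over stochastic forward passes.

    Dropout is kept *active* at inference time (model.train()); the spread
    across passes serves as an uncertainty estimate."""
    model.train()  # keeps dropout sampling new masks at test time
    probs = torch.stack(
        [torch.softmax(model(x), dim=1) for _ in range(n_samples)]
    )
    return probs.mean(0), probs.std(0)  # predictive mean and per-class spread

model = MCDropoutMLP()
x = torch.randn(8, 20)                  # 8 toy inputs
mean_p, std_p = mc_dropout_predict(model, x)
print(mean_p.argmax(1), std_p.max(1).values)
```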
"{\"title\": \"Key comparison methods missing\", \"review\": \"In this paper, their goal is to improve calibration and accuracy by augmenting a classification model with a GP. They base their model off RIO (ICLR 2020) which targets regression problems and tries to predict the residual between predicted value and true value. They propose a model, RED, which instead tries to predict the residual between the predicted confidence score for the true class and 1 \\u2014 the true class target confidence score using a GP. They show strong improvements over the methods they compare to for 125 UCI datasets and CIFAR-10 dataset.\\n\\nI find the approach interesting though the novelty is incremental over the RIO paper. My main concern is that I think some additional methods need to be compared with. For example [1] uses a bayesian last layer which is something that should be compared with. Using an ensemble of single layer NNs for the last layer or using MC-dropout at test time (which is known to approximate Bayesian inference under certain conditions) would also be interesting.\\n\\n[1] \\u201cScalable Bayesian Optimization Using Deep Neural Networks\\u201d by Snoek et al. \\n[2] \\u201cDropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning\\u201d by Gal et al.\", \"edit\": \"Based on the author response in terms of adding additional experiments, I'm raising my score to a 6.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well-performing, simple to implement method for classification error detection; limited set of baselines may not establish generality\", \"review\": \"Update: Following the authors' clarifications and additional experimental work, I'm increasing my rating to 6.\\n\\nThis paper proposes RED, a framework for detecting misclassification errors, based on regression of target confidence scores and application of a Gaussian process for uncertainty in predicted confidence scores. It builds upon RIO, a framework for predicting residuals of regression models and their uncertainties using GPs. Compared with other confidence metrics, RED aims for greater separability between correct and incorrect predictions.\\n\\nThe method is straightforward to implement and performs well against the baselines considered on classification tasks for 125 UCI datasets. However, I question whether the baselines are sufficient; it is not demonstrated whether RED would outperform other confidence scoring and OOD detection methods mentioned in the related work section, such as temperature scaling (or the related method ODIN, proposed in Liang, S., Li, Y., and Srikant, R., 2017. Enhancing the reliability of out-of-distribution image detection in neural networks.) or simply the entropy of the softmax predictions. Unless there is a good justification for the limited set of baselines, I believe the paper's claims to generality are limited.\\n\\nAdditionally, for the OOD detection results shown in Figure 3, why were AUROC and AUPRC not reported? While the scatterplots show separability of OOD data visually, these metrics (used elsewhere in the paper) would give a better indication of performance (and again, I think a greater range of baselines and tasks would be necessary to make any firm claims about OOD detection).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Adding confidence score to NN classifiers without retraining or modifying the model\", \"review\": \"This paper solves an interesting problem of predicting uncertainty in NN without re-raining/modifying the existing NN. The authors propose a framework to calculate a confidence score for detecting misclassification errors by calibrating the NN classifier\\u2019s confidence scores and estimates uncertainty around the calibrated scores using Gaussian processes. This framework is called RED (Residual i/o Error Detection).\\n\\nThis paper is also technically sound and to the best of my knowledge is novel and relevant to the community. \\n\\nIt would be good to apply SVGP directly to some of these datasets and compare the results against NN+SVGP results.\\n\\nYou use the term \\u201ccalibrated\\u201d confidence score/prediction. Could you explain what do you mean by calibrated?\\n\\nI find the presentation of results very confusing. For example, in Table 1, AP-Error is smallest for the RED method and in Table 3 AP-error is the largest for the RED method. In both cases, it is mentioned that the RED method outperforms other methods. \\n\\nYou mentioned ConfidNet outperformed the MCP baseline by a margin of 0.42. I do not see this number on the table.\\n\\nIt would be good if the authors could mention in the paper what is RIO short for.\\n\\nYou mentioned that you need to extend the kernel to multiple output kernel. Could you explain a bit more about that and how you build it?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting problem and results, although the approach seems a bit incremental\", \"review\": \"#######################################################\\nSUMMARY\\n\\nThis paper introduces RED, a new methodology to produce reliable confidence scores to detect missclassification errors in neural networks. The idea is to combine kernels based on both input and output spaces (as in RIO) to define a (sparse) GP that estimates the residual between the correctness of the original prediction and the maximum class probability. The authors show enhanced performance against other related methods and the ability of RED to detect OOD and adversarial data through the variance of the confidence score. \\n\\n#####################################################\\nPROS\\n\\n1) Obtaining confidence scores for neural network predictions is a timely and very relevant topic for the ICLR community, since it is one of the main limitations of real-world applications of current neural nets.\\n\\n2) The related literature review is clear and, to the best of my knowledge, the proposed metholody based on Gaussian Processes is novel. \\n\\n3) The experimental validation of the proposed method on the UCI datasets is strong. It uses a wide range of datasets and several statistical tests, and RED obtains superior performance.\\n\\n4) The idea of using the variance of the proposed confidence score to identify OOD and adversarial data is interesting and promising.\\n\\n######################################################\\nCONS\\n\\n1) My main concern is that the contribution in RED can be regarded somehow incremental given the RIO approach. It utilizes the same rationale behind RIO, and just adapts the necessary components so that it works in classification. The adaptation of these components is also straightforward: the output kernel now works on several dimensions (instead of the scalar dimension of regression) and the target is now the correctness of the original prediction. \\n\\n2) The experimental validation focuses on several competitors which can be considered \\\"of the same family\\\" as the proposed approach. Namely, all of them calibrate the predictions of a pre-trained neural network. I think it would be interesting to also compare to a different \\\"family\\\" of methods. For instance, (Functional) Bayesian Neural Networks are meant to obtain calibrated predictions by leveraging epistemic uncertainty (that coming from the model parameters). \\n\\n3) I do not fully understand the relevance of the experiment with the large deep learning architecture given by the VGG16 model. Since the proposed method works on the pre-trained neural network, my understanding is that the complexity of the neural network itself is not relevant for the performance of the proposed approach. Also, in this experiment I miss several independent runs to assess the results variability. \\n\\n####################################\\nAdditional questions/feedback:\\n\\n1) In the second paragraph of section 4.2., there seems to be a typo when reporting the margin. It is said 0.42 and 0.55 for ConfidNet and RED respectively, but I think it should be 0.042 and 0.055 by looking at Table 3.\\n\\n2) It is not entirely clear to me why the process described in section 4.3. (second paragraph) produces proper OOD and adversarial data. For instance, some of the intended OOD data could be similar to training data (specially because the latter is being normalized to mean 0 and std 1). And similarly for the adversarial case. 
I think this could be better explained.\\n\\n3) When it comes to real practice, a key decision is to set a threshold on the confidence score to decide what instances should be supervised by an expert. Is there any recommendation on this?\\n\\n####################################### \\nAFTER REBUTTAL\\n\\nThe new baselines added make the experimental validation more convincing. Therefore, I have raised my rating to 6 (Marginally above the acceptance threshold). However, I still believe that the contribution is incremental, and I think the paper would gain in terms of novelty if it focused more on the detection of OOD data and adversarial attacks (which right now is more like a preliminary test).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
8xeBUgD8u9 | Continual learning in recurrent neural networks | [
"Benjamin Ehret",
"Christian Henning",
"Maria Cervera",
"Alexander Meulemans",
"Johannes Von Oswald",
"Benjamin F Grewe"
] | While a diverse collection of continual learning (CL) methods has been proposed to prevent catastrophic forgetting, a thorough investigation of their effectiveness for processing sequential data with recurrent neural networks (RNNs) is lacking. Here, we provide the first comprehensive evaluation of established CL methods on a variety of sequential data benchmarks. Specifically, we shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs. In contrast to feedforward networks, RNNs iteratively reuse a shared set of weights and require working memory to process input samples. We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements, which lead to an increased need for stability at the cost of decreased plasticity for learning subsequent tasks. We additionally provide theoretical arguments supporting this interpretation by studying linear RNNs. Our study shows that established CL methods can be successfully ported to the recurrent case, and that a recent regularization approach based on hypernetworks outperforms weight-importance methods, thus emerging as a promising candidate for CL in RNNs. Overall, we provide insights on the differences between CL in feedforward networks and RNNs, while guiding towards effective solutions to tackle CL on sequential data. | [
"Recurrent Neural Networks",
"Continual Learning"
] | Accept (Poster) | https://openreview.net/pdf?id=8xeBUgD8u9 | https://openreview.net/forum?id=8xeBUgD8u9 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"MvKiSdBgh6l",
"aIBORe2xpPj",
"STMFQzmD9g0",
"_si-54gYUFS",
"FJ921npMm1W",
"pDxMuv5UwjV",
"VfWl6R-fUXg",
"2x05K3R-8TX",
"RPpv2sMbfTx",
"yvG2fdeCsSO"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1615363187220,
1610040461890,
1605900405866,
1605900273446,
1605900160452,
1605900126152,
1605899711056,
1603873969671,
1603861825922,
1603813681438
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3663/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3663/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3663/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3663/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Reply to Program Chairs\", \"comment\": \"We thank the chair for the very encouraging comments.\", \"to_address_the_valid_concern\": \"EWC only requires the diagonal elements of the Fisher Information matrix, which is why we do not need to compute the full outer product. What is however important is the correct specification of the negative-log-likelihood (NLL), which has to take the sequential nature of the problem into account as described in SM eq. 6 - 8. For clarity, we use the empirical Fisher, which is computed by averaging the following term across the training dataset $\\\\bigg( \\\\frac{\\\\partial \\\\text{NLL}_n}{\\\\partial \\\\psi_i} \\\\bigg)^2$, where $\\\\text{NLL}_n$ is the NLL computed for each sample in the dataset separately.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"I agree with the reviewers, and I find the careful analysis of CL approaches relying on regularization for RNN useful and insightful. I do feel that a lot of the interesting content is still in the appendix (from a quick skim and looking at the plots in the appendix) but I think something like this can potentially be unavoidable.\\n\\nI do like the separation between sequence length and memory requirements. I think making observations about different types of recurrent architectures is hard, but I think the paper does a good job to raise some interesting questions. \\n\\nA note that I would make (that I haven't seen raised through a quick look in the paper) is that is not clear how the Fisher Information Matrix should be computed in case of a recurrent model (which is a problem in general). E.g. a typical thing is to compute it as for a feed-forward model (using the gradients coming from BPTT) which is feasible computationally, but actually that is problematic as you first sum gradients before taking their outer-product rather than summing the outer-products corresponding of the different terms in the gradient. I'm wondering if that plays a role here as well.\\n\\nOverall I think the paper does careful analysis and ablation studies and raises some interesting observation of how one should approach CL algorithms for RNN models.\"}",
"{\"title\": \"Reply to AnonReviewer4\", \"comment\": \"We thank you for the constructive feedback and for the appreciation of our work. We have tried to address all your concerns, which we outline below.\\n\\n - *Provides on the other hand less algorithmic innovation. In particular, it focuses on methods related to (Oswald et al., 2020), a paper that was accidentally omitted from the reference list, but apparently is this ICLR 2020 paper that contains related material: von Oswald, J., Henning, C., Sacramento, J., & Grewe, B. F. (2019). Continual learning with hypernetworks. arXiv preprint arXiv:1906.00695. ICLR 2020.*\\n\\nWe may misunderstand the raised concern, but the paper of von Oswald et al. which was first published on arXiv in 2019 is identical to the version that was peer-reviewed and accepted at ICLR in 2020. Therefore we only cited the ICLR version. \\n\\n - *I am uncertain about the generalizability of results that were demonstrated for the chosen benchmark tasks. In particular, the conceptually important distinction between challenges arising from working memory load and sequence length is tested by variations of the copy task with padded inputs, where relevant and irrelevant input bits are distinguished in a very simple way that is hardly met by real-world scenarios.*\\n\\nWe thank AnonReviewer4 for this comment. Indeed, these two factors can be independently controlled in the Copy Task, but they are often entangled in real-world scenarios. To verify whether our observations on this synthetic task hold for more complex scenarios, we explored three real-world datasets and linked these to our original analysis whenever possible. In particular, the SSMNIST experiments confirm that an increase in working memory requirements (due to an increase in the number of digits per task) correlates with a significant drop in performance for weight-importance methods, but not for the hypernetwork approach. However, because increasing the number of digits per task also leads to an increase in the weight reuse, it is indeed not clear whether weight reuse is not the factor affecting weight-importance methods. To control for this, we now complement our analyses with an additional SSMNIST experiment, where sequence length is increased without a concomitant increase in working memory (SM G.8). Rather than using zero-padding, we achieve this by upsampling the original stroke sequences, i.e. by increasing temporal resolution and thereby adding redundant information. Consistent with our interpretation of the Copy Task results, these new experiments confirm for a real-world task that a sole increase in the sequence length doesn\\u2019t lead to a drop in performance of weight-importance methods.\\n\\n - *There may also be differences arising from different types of RNNs, and it is not clear to me to what extent one can make conclusions about all of them by testing on just one type.*\\n\\nWe understand the concern raised by AnonReviewer4 regarding the generalizability of our results to other types of RNNs. For this reason, we repeated the Copy Task analyses usingLSTMs instead of vanilla RNNs (SM G.4). These analyses show that, also in LSTMs, i) the dimensionality of the hidden space increases with higher working memory requirements (increasing pattern length) but not with a mere increase in weight reuse, and that ii) this increase correlates with an increase in importance values. 
This is consistent with the results obtained for vanilla RNNs, and highlights the generality of our results across RNN architectures.\"}",
"{\"title\": \"Reply to AnonReviewer2\", \"comment\": \"We thank you for the careful review and the overall positive feedback. Below, we reply to all the raised concerns point-by-point.\\n\\n - *It would be interesting to see results on more realistic tasks, like sentence classification.*\\n\\nWe agree that it would be interesting to show results on a wider diversity of problems related to sequential processing. However, because a thorough investigation of all considered methods requires immense computational resources, we were forced to carefully select which datasets to consider. The three real-world datasets we explored were selected to ensure diversity in the type of tasks (i.e. classifying entire sequences as in Audioset or SSMNIST vs. assigning one label per timestep as in PoS tagging) and in the input domains (i.e. sound in Audioset, images in SSMNIST and text in PoS tagging). Therefore, our results include both a realistic NLP setting (the Part-of-Speech tagging task presented in SM G.9) where a single model needs to learn to tag sentences from a different language in each task, and a realistic sequence classification task (Audioset), performed on audio snippets rather than text. As the general conclusions from those experiments coincide, they are likely to be transferable to related scenarios such as sentence classification.\\n\\n - *The conclusion on working memory requirement didn\\u2019t consider the possibility of knowledge sharing between tasks. For example, two complicated tasks may share a common sub-network that is essential for solving both tasks. Such that, in an ideal situation, the model doesn't need to allocate large amounts of extra resources to learn the second task. It would be interesting to see how different CL methods can reuse knowledge learnt from previous tasks.*\\n\\nWe thank AnonReviewer2 for this interesting comment. Indeed, the scenario we consider for the theoretical analysis is an extreme case and, most commonly, real-world tasks benefit from some form of knowledge sharing. In fact, in linear RNNs, it is easy to see that whenever the subspaces associated with individual tasks may overlap, the overall dimensionality of the used hidden space can be less than the sum of dimensionalities of individual task-related subspaces, thus freeing up capacity in the recurrent weights to learn new tasks. In other words, together with the working memory of individual tasks and the number of tasks, task similarity will also play a role in the effectiveness of weight-importance methods for RNNs. To clarify this point, we added a comment to the linear RNN analysis section in SM C. Furthermore, inspired by this comment, we added a section (SM G.10) where we illustrate the fact that both weight-importance methods and the hypernetwork approach can benefit from forward transfer of knowledge when sequentially learning tasks, while highlighting that the mechanisms for doing so differ between the two approaches.. Finally, we added a concurrent study to our related work section which addresses the complementary question of how to learn a set of weights that optimally allocates subspaces across tasks to allow transfer while preventing forgetting (Duncker et al. 2020, NeurIPS).\"}",
"{\"title\": \"Reply to AnonReviewer1 (2/2)\", \"comment\": \"- *An analysis of why hypernetworks perform better would be interesting. So would have been some proposals for methods designed specifically for CL with RNNs.*\\n\\nWe thank the reviewer for this valuable comment. In the revised manuscript we report new complementary analyses that address these concerns.\\n\\n**Elaborating on HNET\\u2019s performance.** To thoroughly discuss why the HNET method performs better than other regularization approaches we added Sec. G.11 to the SM. There, we discuss in more detail why we consider HNET a suitable approach for CL in RNNs, and more generally, why it is an intriguing approach for CL. In addition, we more explicitly stress the fundamental differences with respect to weight-importance methods and, inspired by the reviewers\\u2019 request, examine solutions obtained with each of the two approaches. For this, we quantify the viability of the solutions and find that the HNET approach has the ability to find solutions in flatter regions of the loss landscape. This is very relevant in a CL setting, because such solutions, that tend to generalize better, will be more robust to perturbations introduced when learning new tasks. In turn, lower levels of regularization will be required to maintain previous knowledge, leading to increased plasticity for learning new tasks. Nevertheless, our empirical analysis reveals that the current HNET approach does not necessarily succeed in finding such solutions, and therefore opens interesting avenues for actively guiding towards flat minima in future work.\\n\\n**Elaborating on methods designed specifically for CL in RNNs.** While we do not propose new CL methods, our paper will guide future work in this area by providing a first thorough investigation of the strengths and weaknesses of established CL methods applied to RNNs. Besides fairly acquired baselines, we also provide key insights that can direct the development of CL methods tailored for RNNs. We summarize two major discussion points below. \\n\\n - While this work focuses solely on CL methods that operate on RNNs with a static set of weights, it is an intriguing question whether more challenging practical scenarios can benefit from time-dependent processing (i.e. through the use of a different set of weights per timestep). As suggested by prior work, recurrent hypernetworks allow time-dependent processing in RNNs via time-specific weights. It is therefore a valuable insight to realize that the HNET approach can just as readily be applied to this setting by protecting the static set of weights maintained within the recurrent hypernetwork. Because the CL regularizer is agnostic to the more challenging architectural setup, one can expect it to perform similarly well.\\n\\n - As you pointed out, we mention in our discussion that making the recurrent processing of weight-importance methods task-conditioned (e.g. by providing task identity as an additional input) could vastly improve their performance. The reason behind this is that the need to solve all tasks simultaneously within the hidden space would be overcome, and the amount of working memory that could be allocated to each task could be increased. 
\\n\\nWe hope that this type of insight can inspire future work and that our provided baselines and code will facilitate the development of more tailored CL approaches for RNNs.\\n\\n - *The main text doesn\\u2019t clearly mention that vanilla RNNs are used in section 5.1*\\n\\nThank you for pointing this out; the text now specifically mentions that vanilla RNNs are used in Sec. 5.1.\"}",
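For readers less familiar with the hypernetwork (HNET) approach discussed throughout these replies, the core of its CL regularizer — penalizing drift of the hypernetwork's output for stored task embeddings, so that a single static set of (hyper-)weights protects all previous tasks at once — can be sketched as follows. This is a simplified illustration of von Oswald et al. (2020); the published method includes further refinements (e.g., a lookahead on the candidate weight update), and all names here are placeholders:

```python
import torch

def hnet_cl_regularizer(hnet, task_embs, stored_outputs):
    """Penalize drift of hypernetwork outputs for previous tasks.

    hnet(e) maps a task embedding e to the (flattened) weights of the
    target network; stored_outputs[k] is hnet's output for embedding k,
    checkpointed *before* training on the current task. Keeping these
    outputs fixed preserves the solutions of all earlier tasks."""
    reg = torch.zeros(())
    for e, w_old in zip(task_embs, stored_outputs):
        reg = reg + ((hnet(e) - w_old) ** 2).sum()
    return reg / max(len(task_embs), 1)

# During training on task t (beta sets the regularization strength):
#   loss = task_loss + beta * hnet_cl_regularizer(hnet, embs[:t], outputs[:t])
```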
"{\"title\": \"Reply to AnonReviewer1 (1/2)\", \"comment\": \"We thank you for your encouraging assessment and the valuable feedback. Below we describe the individual changes concerning your remarks point-by-point.\\n\\n - *The motivation and conclusions from the ssMNIST task is not very evident and the tasks doesn't seem to make a clear point.*\\n \\nIn brief, this benchmark allows us to investigate the effect of increasing working memory requirements in a real-world dataset. Indeed, the amount of information to be stored and manipulated per sample can be directly controlled by the number of digits per input sequence. Furthermore, in contrast to the Copy Task variants (which were designed to closely match the theoretical assumptions of the linear RNN analysis), this dataset allows investigating a scenario where task identity can be inferred from the input alone. Intriguingly, our results show that weight-importance methods are disproportionately affected by task complexity (compared to, for instance, the HNET approach). This shows that, although weight-importance methods could in theory perform task-conditional computation, in practice they cannot leverage this information efficiently. In the revised manuscript, we have rephrased the SSMNIST section such that the reason for integrating these experiments becomes more clear. \\n\\n - *Readability of some parts of the paper depend very heavily on the supplement, and the paper itself doesn't stand by itself. For example, the linear RNN analysis, description of some tasks (I had to read the supplement to actually understand the permuted copy and the pattern manipulation tasks). As an aside, I hope the authors submit an extended version to an appropriate venue, because I think many of the results and discussions relegated to the supplement seem quite interesting and relevant for the community (e.g. the task-conditional processing).*\\n\\nWe found AnonReviewer1\\u2019s enthusiasm regarding the supplementary insights and discussions encouraging and take the comment on readability very seriously. To improve readability, we expanded the paragraph that summarizes the analysis on linear RNNs (Sec. 4), such that the most important aspects necessary for understanding the analysis are covered. Furthermore, we made the description of the Copy Task variants more accessible i) by rephrasing the description such that these can be understood without having to refer to the SM, ii) by adding a schematic that illustrates the different Copy Task variants in the main text (Fig. 2), and iii) by elaborating on the schematic in the SM (Fig. S1) with a complete description of the inputs and outputs corresponding to each of the variants. Overall, these changes have made the main text more self-contained, while interesting pointers to further details and control experiments which we do not deem necessary for the main story of the paper remain in the SM.\"}",
"{\"title\": \"General response to all reviewers\", \"comment\": \"We thank the reviewers for the evaluation of our manuscript and for the overall positive assessment. We are grateful for the constructive feedback, which allowed us to substantially improve the paper. Below we briefly list how we addressed the main concerns and later provide further details in a point-by-point reply.\\n\\n**First,** to address one of the main concerns raised by AnonReviewer4, we now show that our results from the theoretical analysis and the Copy Task generalize to real-world datasets and other RNN architectures. Specifically, to improve the relationship between these results and real-world problems, we conducted an additional SSMNIST experiment where, similarly to the Padded Copy Task, sequence lengths increase while working memory requirements remain fixed. Consistent with the Copy Task results, the performance of weight-importance methods is not affected by a mere increase in weight reuse (results in SM G.8). Encouraged by AnonReviewer4, we also investigated whether our Copy Task results generalize to LSTMs. For this, we repeated the Copy Task analyses and confirmed that the observed trends hold for this type of network.\\n\\n**Second,** to address an important comment from AnonReviewer1, we provide a more detailed discussion and have performed additional analyses to gain novel insights into the factors leading to the superior performance of the hypernetwork approach as compared to weight-importance methods (SM G.11). Also, as suggested by AnonReviewer2, we now explicitly consider the case of task similarity and forward transfer (results in SM G.10). For this, we performed additional experiments that illustrate that both weight-importance methods and the hypernetwork approach can benefit from knowledge transfer when tasks are similar.\\n\\n**Third,** as suggested by AnonReviewer1, we took a serious effort to improve the readability of the main text by elaborating explicitly on content that was initially relegated to the supplementary material. These improvements include, for example, adding a small figure with typical Copy Task patterns, which will make it easier for readers to understand the different variants of this dataset (Fig. 2). Furthermore, we extended the paragraph discussing the analysis of linear RNNs (Sec. 4), such that the logic and intuition can be understood without having to refer to the SM. \\n\\n\\nGiven the new experiments and better readability, which considerably improved the manuscript, we kindly welcome all reviewers to reassess our paper and to re-evaluate their rating if they agree with the improvements. We are open to any suggestions that may further improve our manuscript and remain available to answer any question to support the reviewers in their assessment process.\"}",
"{\"title\": \"An interesting and timely analysis of CL for RNNs\", \"review\": \"Summary:\\n\\nThe authors do an evaluation of the application of weight-importance continual learning methods to recurrent neural networks (RNNs). They draw out the tradeoff between complexity of precessing and just remembering (working memory) in terms of the applicability of these weight importance methods. They also provide some theoretical interpretation based on stying linear RNNs.\\n\\nOverall, I vote for accepting this paper because the work is well-motivated, thorough, and provides useful insights. My major concerns are listed below.\", \"strengths\": [\"The paper is very well written, and the motivation, methods and inferences are quite clearly described. The main question the authors are considering is very clear.\", \"The results around use of existing continual learning methods to RNNs is timely and relevant.\", \"The insight into the tradeoff between complexity of processing and working memory requirements and its effect on the ability of the network to continually learn is very interesting. Similarly the fact that hypernetwork based approaches work better than other approaches most of the time is useful.\", \"The analysis of the above tradeoff using a linear RNN is also interesting since it provides a nice intuition for why the tradeoff exists.\"], \"weaknesses\": [\"The motivation and conclusions from the ssMNIST task is not very evident and the tasks doesn't seem to make a clear point.\", \"Readability of some parts of the paper depend very heavily on the supplement, and the paper itself doesn't stand by itself. For example, the linear RNN analysis, description of some tasks (I had to read the supplement to actually understand the permuted copy and the pattern manipulation tasks). As an aside, I hope the authors submit an extended version to an appropriate venue, because I think many of the results and discussions relegated to the supplement seem quite interesting and relevant for the community (e.g. the task-conditional processing).\", \"An analysis of why hypernetworks perform better would be interesting. So would have been some proposals for methods designed specifically for CL with RNNs.\"], \"minor\": \"The main text doesn't clearly mention that vanilla RNNs are used in section 5.1\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Extensive study and convincing results\", \"review\": \"This paper provides a systematic evaluation of the performance of different CL methods on RNN. The study suggests that high working memory requirements increase difficulty of learning new tasks, while the average length of input sequence is not strictly related to the difficulty of learning new tasks. The author proposes to overcome this problem by using a hypernetwork-based CL approach, which shows promising results in the experiments.\", \"strength\": [\"The paper provides extensive study to compare different continual learning methods.\", \"The conclusion is well supported by analysis of intrinsic dimension and performance on different tasks.\", \"The paper is well written and easy to follow\", \"Weaknesses\", \"It would be interesting to see results on more realistic tasks, like sentence classification.\", \"The conclusion on working memory requirement didn\\u2019t consider the possibility of knowledge sharing between tasks. For example, two complicated tasks may share a common sub-network that is essential for solving both tasks. Such that, in an ideal situation, the model doesn't need to allocate large amounts of extra resources to learn the second task. It would be interesting to see how different CL methods can reuse knowledge learnt from previous tasks.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This review on continual learning for RNNs provides a very valuable service to the community.\", \"review\": \"Pros: Most work on continual learning addresses only feedforward networks. This paper\\nprovides apparently the first systematic discussion and comparison of CL methods for RNNs\\nThereby it provides an important service to the community. The material is presented thoroughly in the Suppl.\", \"cons\": \"Provides on the other hand less algorithmic innovation. In particular, it focuses on methods related to (Oswald et al., 2020), a paper that was accidentally omitted from the reference list, but apparently is this ICLR 2020 paper that contains related material:\\nvon Oswald, J., Henning, C., Sacramento, J., & Grewe, B. F. (2019). Continual learning with hypernetworks. arXiv preprint arXiv:1906.00695. ICLR 2020.\\n\\nI am uncertain about the generalizability of results that were demonstrated for the chosen benchmark tasks. In particular, the conceptually important distinction between challenges arising from working memory load and sequence length is tested by variations of the copy task with padded inputs, where relevant and irrelevant input bits are distinguished in a very simple way that is hardly met by real-world scenarios. \\n\\nThere may also be differences arising from different types of RNNs, and it is not clear to me to what extent one can make conclusions about all of them by testing on just one type.\\n\\nI tend to vote for accept.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
8mVSD0ETOXl | Prediction of Enzyme Specificity using Protein Graph Convolutional Neural Networks | [
"Changpeng Lu",
"Samuel Z Stentz",
"Joseph H Lubin",
"Sijian Wang",
"Sagar D Khare"
] | Specific molecular recognition by proteins, for example, protease enzymes, is critical for maintaining the robustness of key life processes. The substrate specificity landscape of a protease enzyme comprises the set of all sequence motifs that are recognized/cut, or just as importantly, not recognized/cut by the enzyme. Current methods for predicting protease specificity landscapes rely on learning sequence patterns in experimentally derived data with a single enzyme, but are not robust to even small mutational changes. A comprehensive evaluation of specificity requires consideration of the three-dimensional structure and energetics of molecular interactions. In this work, we present a protein graph convolutional neural network (PGCN), which uses a physically intuitive, structure-based molecular interaction graph generated using the Rosetta energy function that describes the topology and energetic features, to determine substrate specificity. We use the PGCN to recapitulate and predict the specificity of the NS3/4 protease from the Hepatitis C virus. We compare our PGCN with previously used machine learning models and show that its performance in classification tasks is equivalent or better. Because PGCN is based on physical interactions, it is inherently more interpretable; determination of feature importance reveals key sub-graph patterns responsible for molecular recognition that are biochemically reasonable. The PGCN model also readily lends itself to the design of novel enzymes with tailored specificity against disease targets. | [
"graph convolutional neural networks",
"protease specificity",
"proteins",
"Rosetta energy function"
] | Reject | https://openreview.net/pdf?id=8mVSD0ETOXl | https://openreview.net/forum?id=8mVSD0ETOXl | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"y-j8oEMSOQ",
"o0AT4oUuhbK",
"ivAGI8VdDg8",
"zJ-ZmYp3lsF",
"XQu3BDHepXC"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512671,
1603936605871,
1603924066705,
1603739985454,
1603723656033
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3662/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3662/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3662/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3662/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All four referees have indicated reject. Severe points of criticism have been raised, concerning the lacking novelty, the experimental setup and the significance and interpretation of results. I fully agree with the reviewers in all important points, so I recommend rejection.\"}",
"{\"title\": \"PREDICTION OF ENZYME SPECIFICITY USING PROTEIN GRAPH CONVOLUTIONAL NEURAL NETWORKS\", \"review\": \"This paper uses a structure-based molecular interaction graph generated from the Rosetta interaction energy function to develop protein graph convolutional neural networks (PGCN) that predict enzyme substrate specificity. \\tThe authors clearly describe the goal of being able to accurately model the substrate specificity of enzymes such as proteases, including both a description of those substrates that the enzyme recognizes in addition to those substrates that it does not recognize. They propose that this model should capture the energetics of interactions between the enzyme and potential substrates, such that substrates recognized by the enzyme are assigned lower energies by the model than substrates that are not recognized by the enzyme. To address this challenge, they propose a protein graph convolutional neural network, in which the enzyme and substrate residues are modeled as nodes, while interaction energies obtained via a pairwise decomposition of the energy of the enzyme-substrate complex are considered as node and edge features.\\n\\nThe authors provide some survey of recent literature that covers protein-substrate interactions. They can improve by discussing the contributions of papers on this subject from a wider range of research labs - the majority of papers cited in this section feature a common author. Similarly in the related work on graph convolutional networks for protein-related problems the authors should cite work such as e.g. the graph convolutional models for protein ligand interaction prediction from Torng and Altman 2019 among others. \\n\\nThe authors present results for both 'hybrid' and 'energy-only' models. The energy-only feature encoding for the other machine learning models is not clearly described, and it is not clear why the energy-only feature encoding is of interest - the authors do not describe any context in which this encoding would be used in preference to the hybrid sequence and energy feature encoding, which performs better. However, for the energy-only feature encoding the model developed in this paper always performs the best - it is important for the authors to explain why this result is of interest to the reader, since the performance using this encoding is always worse than the performance of multiple other models using the hybrid encoding. For the hybrid encoding and overall, either the ANN or SVM models perform best. \\n\\nAn important question is whether the considerably more detailed energy feature encoding of the PGCN model effectively contains information about the amino acid sequence, making the one-hot sequence encoding that is in present in the hybrid encoding but absent from the energy-only encoding redundant. This would explain why PGCN does comparatively well in the energy-only setting, compared to the much simpler coarse grained energy terms used by the other models. Overall it is very unclear what value the PGCN model and featurization adds.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"PGCN performs worse than ANN?\", \"review\": [\"This work presents a protein graph convolutional neural network (PGCN) which feeds features from Rosetta through a graph CNN to predict substrate specificity.\", \"Strengths\", \"Related work is concise but explanatory\", \"Incorporating features from Rosetta into a graph-based neural network is creative. This is a contribution / idea that could be used in other paper as well.\", \"Weaknesses\", \"The architecture is described well but there are not any ablations or explanations of how the authors converged on this architecture. Could the results be improved by iterating on the architecture?\", \"The tested baselines are logistic regression, random forest, decision tree, SVM, and ANN. However, there are no baselines from previous literature, so it is difficult to place this work in the context of the field.\", \"In Table 2 and Table 3, the default ANN from Tensorflow performs better than the PGCN. In that case, what is the advantage of the PGCN?\", \"The authors compare a \\\"hybrid\\\" (energy + sequence) to a \\\"pure\\\" approach (energy). The PGCN performs better than other methods in the \\\"pure\\\" setting, but all models perform similarly in the \\\"hybrid\\\" setting. Is there a practical reason to ever use to the \\\"pure\\\" setting? If we are able to get energies in Rosetta, than we likely have the sequence available to us. If the authors can answer this, my next question would be why is PGCN barely affected by incorporating the sequence whereas the other methods see a great improvement?\", \"From the abstract and introduction, it seems that the authors set up the problem in this way so that they could generalize to new enzymes. However, this is not tested in the paper.\", \"Additional\", \"In the introduction, the authors should be more clear about \\\"substrate sequence motifs\\\" - do they mean primary or tertiary structure?\", \"In equation 1, why is a simple weighted sum of edge features used? The individual components could be useful to the network. Why not provide the features directly?\", \"Overall, I was excited to see Rosetta features incorporated into a neural network. However, the evaluation, results, and model development are weak. These would need to be improved if the paper is to be accepted to ICLR.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #1\", \"review\": \"**Summary**\\nThe paper proposes a new method for calculating the specificity of proteases towards their substrates. The method constructs a graph with node and edge features from the Rosetta molecular energy function, and uses this as the basis for a graph convolutional neural network. The authors report performance on-par on better than existing methods, and highlight interpretability as a potential advantage of the method.\\n\\n**Strengths**\\n o The manuscript addresses and important problem\\n o The method claims to produce state-of-the-art performance with a new method.\\n o By basing the method on physical features derived from an established energy function, the method can provide some degree of explanation for the predictions made.\\n\\n**Weaknesses**\\n o The paper is primarily an application of existing machine learning methodology, rather than introducing a new method.\\n o It is unclear whether the baselines in the paper truly reflect the current state-of-the-art, and a proper train/validation/test split was employed during training.\\n o The notation used in the paper is not consistent. \\n\\n**Recommendation**\\nI recommend this paper be rejected. In my view, this paper reads more like a machine learning application than a methodological contribution. The presented technique is based on established graph neural network methods, and in the areas were it potentially deviates from this standard it provides insufficient motivation and discussion (see examples below). Since there is no systematic exploration of the methodological contributions, I do not feel that there are sufficient lessons learned in this paper to constitute a valuable contribution to a Machine Learning conference.\\n\\n**Supporting arguments for recommendation**\\nExamples with insufficient discussion/motivation of methodological choices:\\n o An identify matrix seems to be added to the edge-feature matrix A' (page 4, bottom), but it is unclear why this choice was made - and why it makes sense - since these matrices contain weighted sums of (normalized) energies.\\n o A custom dropout technique is introduced, where combinations of hidden nodes are \\\"muted\\\". It is not discussed why this choice was made, and how it compares to standard dropout.\", \"o_results_are_reported_for_two_scenarios\": \"A hybrid scenario and a pure energy scenario. However, these choices are not motivated. Are there cases where we know the energies but not the sequences? Does Rosetta even allow for calculation of energies without knowing the sequence?\\n\\nThe language used in the paper is also confusing at times. For instance, the term \\\"adjacency matrix\\\" is used to refer to an edge-feature matrix, and they discuss generalization as an \\\"anti-overfitting problem\\\". Also, the edge features that they introduce seem to be referred to by at least three different names, E (eq 1), A (eq 2), and Q/P (top of page 4), and edges are referred to by both single and double indices.\\n\\nI'm also not confident about how significant the reported results are. While Page 5 mentions a training and test set on Page 5, it is not clear how the hyper-parameters were tuned (no mention of a validation set). 
Finally, while they do compare their method to several methods from the literature, it was not quite clear to me whether their reimplementations were *identical* to the methods from the literature - or simplified versions.\\n\\nIt is also quite striking that the simple ANN baseline performs so well in the \\\"Hybrid scenario\\\". The authors should at least discuss why this might be.\\n\\n**Questions to the authors**\\nPage 5. \\\"We use PGCN... based on two sets\\\"\\nWhat is the motivation for studying these two cases? Is the amino acid sequence not always available? \\n\\nPage 5. \\\"We then compare our results with SVM models from...\\\"\\nAre these the exact same implementations, or did you merely use *some* SVM implementation? Have you confirmed that your implementations can reproduce the originally reported results of these competing methods?\\n\\nPage 2. \\\"None of methods mentioned above use energy-related features.\\\"\\nPage 5. \\\"All the other methods have bybrid feature encodings model to include ... and energy terms...\\\"\\nIt seems that these two statements are contradictory. Could you clarify?\", \"page_5\": \"\\\"The proportion of training and testing...\\\"\\nWhich dataset do you use for optimizing the hyperparameters of the model? Do you have a validation set?\\n\\n**6. Additional feedback**\", \"page_1\": \"\\\"non-recogntion\\\" (typo)\\nPage 2, line 2: \\\"Protein Convolutional Neural Networks (PGCN)\\\". Consider writing \\\"Protein Graph Convolutional Network\\\" here, so that it fits with the abbreviation.\\nPage 2, \\\"Methods use machine learning methods...\\\". Is there something missing in this sentence?\\nPage 2, \\\"None of methods mentioned above\\\" -> \\\"None of the methods mentioned above\\\"\\nPage 2, \\\"using LSTM\\\" -> \\\"using an LSTM\\\"\\nFigure 1 caption. The last line was a bit confusing to me - since it described the order of the \\\"node matrix\\\" twice. Should one of these have been the \\\"edge matrix\\\"?\\nPage 3, \\\"Substrate\\\" -> \\\"The substrate\\\"\\nPage 4, \\\"\\\\tilde A' = \\\\tilde A' + I_N\\\". I assume the first element to the right should be A'? \\nPage 6. \\\"non linear ReLU term\\\". This is not really a \\\"term\\\", is it?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
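For reference, the operation this reviewer questions (adding the identity to the edge-feature matrix, followed by symmetric degree normalization) is the standard GCN "renormalization trick" of Kipf & Welling (2017), sketched below. The reviewer's point stands: the recipe implicitly assumes nonnegative, adjacency-like entries, so applying it to a matrix of weighted Rosetta energies needs separate justification. This is an illustrative NumPy sketch of the generic trick, not the paper's code:

```python
import numpy as np

def gcn_renormalize(A):
    """Kipf & Welling (2017): A_hat = D^{-1/2} (A + I) D^{-1/2}.

    Adding the identity inserts self-loops so that each node's own features
    survive a propagation step; the degree normalization keeps the spectrum
    of the propagation operator bounded. NOTE: this assumes nonnegative,
    adjacency-like entries -- negative or energy-valued edge weights can make
    the row sums d nonpositive and break D^{-1/2}, which is exactly why the
    choice deserves justification in the energy-matrix setting."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt
```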
"{\"title\": \"Interesting problem but not at the level of ICLR\", \"review\": \"In the work, the authors applied graph convolutional neural networks to predict enzyme specificity. The problem is very important in biology, which even can be used for designing new drugs. Essentially, the authors generated energy-related features using Rosetta and then built the graph using the molecular structure. Then, they applied the existing GCN to the problem. In the evaluation section, they compared with Logistic Regression, Decision Tree, Random Forest, SVM, ANN, to show the superiority of the method.\\n\\nHowever, the manuscript has the following major flaws.\\n1. There is no novelty in the methodology part. They only applied the existing GCN model. The manuscript is not for the audience of ICLR. \\n2. The baselines are too weak. They only compared with the classic ML methods. They did not compare with the state-of-the-art methods to predict the binding affinity. The authors can find a lot of related literature.\\n3. Even compared with the classic algorithms, the proposed method did not outperform them by a large margin. \\n4. They provide the link to the Github repository, which makes the submission not anonymous anymore. That's the reason that I did not provide related works in comment #2.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
yRP4_BOxdu | Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models | [
"Ali Hashemi",
"Chang Cai",
"Klaus Robert Muller",
"Srikantan Nagarajan",
"Stefan Haufe"
] | We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where both hyperparameters need to be estimated jointly from data. This problem has application in many domains in imaging including biomagnetic inverse problems. Crucial factors influencing accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of noise covariance matrices with full structure. Here, we consider the reconstruction of brain activity from electroencephalography (EEG). This inverse problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from classical sparse Bayesian learning (SBL) models where across-sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using Riemannian geometry, we derive an efficient algorithm for updating both source and noise covariance along the manifold of positive definite matrices. Using the majorization-maximization framework, we demonstrate that our algorithm has guaranteed and fast convergence. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured. | [
"Full-structure Noise",
"Hierarchical Bayesian Regression Models",
"Sparse Bayesian Learning",
"Unsupervised Learning",
"Brain Source Imaging",
"Covariance Estimation."
] | Reject | https://openreview.net/pdf?id=yRP4_BOxdu | https://openreview.net/forum?id=yRP4_BOxdu | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"-owIPoIYl99",
"rzimGTFIPsN",
"uMAWA-nkBdK",
"y9QJY7VyEWh",
"xoL54jrSnfN",
"3_Pryikq0mr",
"6CTmbebA28Q",
"AXm_ur9Flvg",
"-aG7bvrGso",
"Vh2i8ncYCg4",
"9AN9CUTcTfF",
"Gwg2uPpqiwT"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040399336,
1606295012370,
1606121568928,
1605705039078,
1605704983608,
1605699366416,
1605695922028,
1605694344170,
1603944155523,
1603928890235,
1603881910099,
1603632736345
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3661/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3661/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3661/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3661/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper presents hierarchical Bayesian methods for modelling the\\nfull covariance structure in cases where noise dimensions cannot be\\nassumed independent.\\n\\nThis is an important problem with potential practical importance. The\\nwork is solid.\\n\\nConceptual novelty in the work is somewhat limited.\\n\\nThe method is applied in the paper on hierarchical linear\\nregression. It is claimed to be applicable to other methods as well,\\nand the claim is plausible, but to be fully convincing, results and\\ncomparisons would need to be shown. The new extended discussion does\\nhelp somewhat.\\n\\nThere was also discussion about whether ICLR is the best match for\\nthis work. This is not a strereotypical ICLR paper though is relevant.\\n\\nAuthors are encouraged to continue this line of work.\"}",
"{\"title\": \"A summary on major updates in the updated manuscript\", \"comment\": \"We thank all the reviewers sincerely for their constructive comments and valuable suggestions. We studied the reviews and discussions carefully and modified our paper accordingly. Below we provide an overview of the key changes included in [our revision](https://openreview.net/pdf?id=yRP4_BOxdu).\\n\\n*Major Updates:*\\n1. The abstract of the paper has been revised in correspondence to the comment by R4.\\n2. The structure of the introduction has reversed for reflecting a general to specific strategy regarding our proposed method instead of having a focused domain-level problem at first.\\n3. Theorem 3 is now added to the paper summarizing the convergence guarantees and implications of theorem 2. The corresponding proof is also modified, accordingly. \\n4. A real-data analysis is added to the manuscript, which accessed our proposed method for the auditory evoked field (AEF) dataset. \\n5. Discussion Section: This part has been changed dramatically for reflecting the broader impact of our proposed algorithm in correspondence to the reviews. This part can be summarized into three parts:\\n\\na) We mentioned other signal processing and machine learning problems that can be formulated into our setting, e.g., kernel width learning in GP and matrix-norm problems. \\n\\nb) We listed the papers in fMRI literature that tackles the noise learning problem. \\n\\nc) We focused on two favorable properties of our proposed method in comparison to the current works in the literature: 1) in comparison to fMRI literature that relies on using EM, we proposed an efficient MM approach with provable convergence guarantees and 2) in comparison to current noise learning approaches that either are relying on Type-I techniques or assuming full knowledge of baseline data, we proposed a joint learning scheme of source and noise within the type-II ML framework.\"}",
"{\"title\": \"Response acknowledged\", \"comment\": \"Thank you for the responses.\\n\\n1. Venue: You are certainly correct that the overall topic fits ICLR, and I did not intend to question this. The appropriateness is not a binary call, but depends heavily on the writing style and nature of the technical contribution. I still retain my opinion that the paper is not an ideal fit for this conference, even though it naturally fits within the broad borders of the topic definition. As a concrete example, both AISTATS and NeurIPS where many of your examples were published would be slightly better matches.\\n\\n2. Other models: This is one side of what I was looking for, identifying possible uses cases in other tasks. However, in this form the relation still remains on very high level. It is quite obvious full-rank noise is relevant in many applications, but the more interesting aspect of the related work would be in more detailed discussion on practical level. After all, you rely on specific learning algorithms that cannot be directly plugged in to various specialised algorithms, and it would be more interesting if you were able to point out some examples where your technique could be directly applied.\\n\\nOverall, I still think the paper is interesting and worth publishing, but in the current form does not advance the field sufficiently from the perspective of this particular venue. I encourage you to keep improving the presentation and looking for the best possible publication venue.\"}",
"{\"title\": \"Response to Reviewer 3 - Part II (References)\", \"comment\": \"[1] P. Ablin et al., \\\"Spectral independent component analysis with noise modeling for M/EEG source separation.\\\", arXiv preprint arXiv:2008.09693.\\n\\n[2] R. Prasad et al., \\\"Joint channel estimation and data detection in MIMO-OFDM systems: A sparse Bayesian learning approach.\\\", IEEE Transactions on Signal Processing, 63 (20), (2015), 5369\\u20135382.\\n\\n[3] P. Gerstoft et al., \\\"Multisnapshot sparse Bayesian learning for DOA.\\\", IEEE Signal Processing Letters, 23 (10), (2016), 1469\\u20131473.\\n\\n[4] S. Haghighatshoar and G. Caire, \\\"Massive MIMO channel subspace estimation from low-dimensional projections.\\\", IEEE Transactions on Signal Processing, 65 (2), (2017), 303\\u2013318.\\n\\n[5] M. B. Khalilsarai et al., \\\"Structured channel covariance estimation from limited samples in Massive MIMO.\\\", in IEEE International Conference on Communications (ICC), IEEE, (2020), pp. 1\\u20137.\\n\\n[6] Y. Feng et al., \\\"A signal processing perspective on financial engineering.\\\", Foundations and Trends\\u00ae in Signal Processing 9 (1\\u20132), (2016), 1\\u2013231.\\n\\n[7] B. Ottersten et al., \\\"Covariance matching estimation techniques for array signal processing applications.\\\", Digital Signal Processing 8 (3), (1998), 185\\u2013210.\\n\\n[8] K. Werner et al., \\\"On estimation of covariance matrices with Kronecker product structure.\\\", IEEE Transactions on Signal Processing 56 (2), (2008), 478\\u2013491.\\n\\n[9] K. Greenewald and A. O. Hero, \\\"Robust Kronecker product PCA for spatio-temporal covariance estimation.\\\", IEEE Transactions on Signal Processing 63 (23), (2015), 6368\\u20136378.\\n\\n[10] T. Tsiligkaridis and A. O. Hero, \\\"Covariance estimation in high dimensions via Kronecker product expansions.\\\", IEEE Transactions on Signal Processing 61 (21), (2013) 5347\\u20135360.\\n\\n[11] T. Tsiligkaridis et al., \\\"On convergence of Kronecker graphical lasso algorithms.\\\", IEEE Transactions on Signal Processing 61 (7), (2013), 1743\\u20131755.\\n\\n[12] A. M. Zoubir et al., \\\"Robust statistics for signal processing.\\\", Cambridge University Press, 2018. \\n\\n[13] A. Benfenati et al., \\\"Proximal approaches for matrix optimization problems: Application to robust precision matrix estimation.\\\", Signal Processing 169, (2020), 107417.\\n\\n[14] E. Ollila et al., \\\"Shrinking the eigenvalues of M-estimators of covariance matrix.\\\", arXivpreprint arXiv:2006.10005.\\n\\n[15] S. Kumar et al., \\\"A unified framework for structured graph learning via spectral constraints.\\\", Journal of Machine Learning Research 21 (22) (2020) 1\\u201360.\\n\\n[16] A. Flinth and A. Hashemi, \\\"Approximate recovery of initial point-like and instantaneous sources from coarsely sampled thermal fields via infinite-dimensional compressed sensing.\\\", in 26th European Signal Processing Conference (EUSIPCO), IEEE, (2018), 1720\\u20131724.\\n\\n[17] H. Wei et al., \\\"Bayesian fusion and multimodal DCM for EEG and fMRI.\\\", NeuroImage 211, (2020) 116595.\"}",
"{\"title\": \"Response to Reviewer 3 - Part I\", \"comment\": \"We thank the reviewer for insightful and constructive comments. We are also very glad that the reviewer found our contribution valuable from a statistical perspective. Here are our responses to the main points raised by the reviewer:\\n\\n1. _Submission is appropriate for the ICLR conference:_ The *\\u201cCall for papers\\u201d* of the general track of the ICLR submission page encourages authors to submit contributions in statistical fields with neuroscience applications. Besides, recently published works in prestigious ML conferences like NeurIPS, ICML and AISTAT, e.g., [[J-A Chevalier, et al. NeurIPS 2020](https://papers.nips.cc/paper/2020/hash/1359aa933b48b754a2f54adb688bfa77-Abstract.html)], [[Tao Tu, et al. NeurIPS 2019](https://papers.nips.cc/paper/2019/hash/6aed000af86a084f9cb0264161e29dd3-Abstract.html)], [[D. Sabbagh, et al. NeurIPS 2019](https://papers.nips.cc/paper/2019/hash/d464b5ac99e74462f321c06ccacc4bff-Abstract.html)], [[A. Farshchian et. al, ICLR 2019](https://openreview.net/pdf?id=Hyx6Bi0qYm)], [[M. Shvartzman, et. al, AISTAT 2018](http://proceedings.mlr.press/v84/shvartsman18a.html)] (_pointed out by the first reviewer_), [[M. B. Cai, NeurIPS 2016](https://papers.nips.cc/paper/2016/hash/b06f50d1f89bd8b2a0fb771c1a69c2b0-Abstract.html)], [[D. Bartz, NeurIPS 2014](https://papers.nips.cc/paper/2014/hash/fa83a11a198d5a7f0bf77a1987bcd006-Abstract.html)], and finally [[S. Hitziger et. al, ICLR 2013](https://openreview.net/forum?id=4eEO5rd6xSevQ)]; strongly motivate us to target ICLR as our publication venue. \\n\\n2. _Our algorithm can be incorporated into more complex graphical models:_ \\nWe agree with the reviewer on this point and will dedicate a part in the discussion section to point out the potential benefits of our method in other signal processing and machine learning fields. This section will indeed include the examples provided by the reviewer in addition to some practical examples in which model residuals are expected to be correlated; and therefore, using full-structural noise learning may also prove useful, e.g., spectral independent component analysis [1], direction of arrival (DoA) and channel estimation in massive Multiple Input Multiple Output (MIMO) systems [2-5], robust portfolio optimization in finance [6], covariance matching and estimation [7-14], graph learning [15], thermal field reconstruction [16], and brain functional imaging [17]. Besides, we will also modify the introduction to better reflect this perspective so that it could convey to the reader this fact that the idea of learning full-structural noise can be generally used in a broader aspect of hierarchical Bayesian regression problems such as variational Bayes. We have incorporated an abridged version of this point in the revised manuscript.\\n\\n3. _Incorporating technical details in the main text:_ We will modify the manuscript by moving some of the technical contributions from the appendix to the main text, including the convergence guarantees of the proposed method building on the MM framework (Theorem 9).\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for insightful and constructive comments. We also really appreciate the positive points regarding the manuscript that are raised by the reviewer. Here are our responses to the negative aspects that are pointed out by the reviewer:\\n\\n1. *Revised Abstract:* We consider hierarchical Bayesian (type-II maximum likelihood) models for observations with latent variables for source and noise, where parameters of priors for source and noise terms need to be estimated jointly from data. This problem has application in many domains in imaging including biomagnetic inverse problems. Crucial factors influencing accuracy of source estimation are not only the noise level but also its correlation structure, but existing approaches have not addressed estimation of a full-structure noise covariance matrix. Here, we focus on sparse Bayesian learning (SBL) in regression models specifically for the application of reconstruction of brain activity from electroencephalography (EEG). This problem can be formulated as a linear regression with independent Gaussian scale mixture priors for both the source and noise components. As a departure from the classical SBL models where across sensor observations are assumed to be independent and identically distributed, we consider Gaussian noise with full covariance structure. Using ideas from Riemannian geometry, we derive an efficient algorithm for updating both source and the noise covariance along the manifold of positive definite matrices. Using the majorization-maximization framework we demonstrate that our algorithm has guaranteed and fast convergent properties. We validate the algorithm both in simulations and with real data. Our results demonstrate that the novel framework significantly improves upon state-of-the-art techniques in the real-world scenario where the noise is indeed non-diagonal and fully-structured.\\n\\n2. *Benchmarks:* Regarding algorithmic comparison, we would like to emphasize that to the best knowledge of the authors, the proposed method here is the first *ML-II* work that learns full-structural noise jointly with estimating the sources. Therefore, it would be highly appreciated if the reviewer updates us with similar ML-II works in the literature. Regarding the MCMC methods, we would like to add that due to the high-dimensional nature of existing regression problems in the biomedical field including our target application, e.g., EEG/MEG inverse problem, MCMC techniques might not be a great candidate as they suffer dramatically from their expensive computational complexity, and are not computationally feasible for the general class of large-scale regression problems. This fact has been confirmed in several works in the literature for practical examples such as magnetic resonance fingerprinting [[Metzner et. al, 2018](https://link.springer.com/article/10.1007/s10182-018-00334-0)] as well as in the line of research by Tamara Broderick. please see [[ICML 2018, Variational Bayesian inference and beyond](http://people.csail.mit.edu/tbroderick//tutorial_2018_icml.html)] for future references. The proposed method in this paper on the other hand scales very well for high-dimensional settings. Finally, as we pointed out in the discussion section, only a couple of recent type-I (MAP) techniques tackled the problem of estimating source and full-structural noise jointly. 
To highlight the benefits of our proposed method, we will specifically add another analysis to compare the performance of our algorithm with the existing type-I techniques under the assumption that they have access to a noise covariance learned from the baseline data.\\n\\n3. *Real data analysis:* We thank the reviewer for bringing to our attention the importance of having real data analysis for the ICLR conference. We certainly plan to add a real data analysis to highlight the benefits of our proposed algorithms in practical scenarios. \\n\\n4. *Open-source code:* To address the reviewer's concern, we will polish our code and put it on GitHub to make it conveniently accessible to the public. We will also put a link to this GitHub repository in the modified draft of the manuscript.\\n\\n5. *Analyses showing algorithmic advantages:* We will add extra analyses highlighting the benefits of our proposed algorithm compared to previous ML-II methods in terms of the value of the negative log-likelihood loss, in addition to a computational complexity analysis of the algorithms in terms of run times.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for insightful and constructive comments. Here are our responses to the main points raised by the reviewer:\", \"we_would_like_to_emphasize_the_following_points_to_highlight_the_main_contributions\": \"1. *Tractable EM algorithm yields slow and non-convergent methods:* Although assuming Gaussian priors certainly simplify the problem, solving the *\\u201cHierarchical\\u201d* Bayesian inference problem in the presence of full-structural noise is quite involved since both the source and noise covariances contribute to the covariance matrix of the measurements, i.e., $\\\\Sigma_y= \\\\Lambda+L\\\\Gamma L^{\\\\top}$. This phenomena dramatically deteriorates the performance of algorithms that are only able to model homo- or heteroscedastic noise. It is difficult to estimate source and noise covariance simultaneously as these parameters are almost indistinguishable from a structural perspective. Note that when jointly estimating sources in the presence of homo- or heteroscedastic noise, there exist a major difference between the structure of the noise and source covariance since $\\\\Lambda$ has a diagonal structure, while $L \\\\Gamma L^{\\\\top}$ forms a full-structural matrix. We believe that using Riemannian geometry as presented in this work is the key to tackle this ambiguity. To highlight the difficulty of the inference problem, we would like to draw the attention of the reviewer to the following reference that elaborates on this matter in the context of type-I regression problem, e.g., solving Lasso in the presence of full-structural noise: [[Massias, et. al, AISTAT 2018, \\u201cGeneralized Concomitant Multi-Task Lasso for Sparse Multimodal Regression\\u201d](http://proceedings.mlr.press/v84/massias18a.html)]\\n\\n2. _Inference focusses on convergence properties and provides closed-form update rules per iteration not computational complexity:_ We completely agree with the reviewer that focusing on computational complexity is one of the major contributions of this paper compared to other techniques in this area such as the EM algorithm. But it is also worth noting that the proposed method, which is built on the MM principle, also benefits from theoretically proven convergence guarantees, which are not easily achievable by relying on the EM technique that is commonly used in the fully-Gaussian setting. It is worth emphasizing our contribution within the MM optimization context, as well. If we restrict our attention to the MM class of algorithms, the constructed surrogate convex functions are commonly minimized using an iterative approach. Our proposed MM algorithm, however, obtains a closed-form solution for optimizing the surrogate function at each iteration of the algorithm, which further advances the efficiency of the algorithm.\\n\\n3. _Broader implications for a larger class of problems:_ Regarding the novelty of the paper, we would also like to emphasize the fact that what we focus on in this article is the specific sparse regression problem within the Hierarchical Bayesian regression framework, but our work certainly has larger implications. For instance, full-structural noise learning could be replaced with other learning parameters like kernel widths in Gaussian process regression or dictionary elements in the dictionary learning problem. 
This perspective shows that it is straightforward to apply our procedure within more complex models with hierarchical priors where particular variational approximations lead to subproblems as defined in this paper. We now include this point in the revised discussion.\\n\\nTo address the reviewer\\u2019s concern about stronger experimental results with real data, we plan to add real data analyses within the coming days in order to highlight the benefits of our proposed algorithms in practical scenarios. \\n\\nWe respectfully disagree with the assessment that the sentence \\\"This paper proposes an efficient optimization algorithm for jointly estimating...\\\" is not a valid claim of the paper: the proposed method is indeed an efficient optimization algorithm that learns both the noise and source covariances jointly.\"}",
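For concreteness, a minimal NumPy sketch of the Type-II cost referenced in the comment above, in which both covariances enter through $\Sigma_y = \Lambda + L\Gamma L^{\top}$. The function and variable names here are our own illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def type2_negative_log_likelihood(Y, L, Gamma, Lambda):
    """Type-II ML cost for y = L x + e with x ~ N(0, Gamma), e ~ N(0, Lambda).

    Y:      (n_sensors, n_samples) observations
    L:      (n_sensors, n_sources) lead field / forward matrix
    Gamma:  (n_sources, n_sources) source covariance
    Lambda: (n_sensors, n_sensors) full-structure noise covariance
    """
    n_sensors, n_samples = Y.shape
    Sigma_y = Lambda + L @ Gamma @ L.T        # both terms are full matrices
    C_emp = (Y @ Y.T) / n_samples             # empirical sensor covariance
    _, logdet = np.linalg.slogdet(Sigma_y)
    # Up to constants: log|Sigma_y| + tr(Sigma_y^{-1} C_emp)
    return logdet + np.trace(np.linalg.solve(Sigma_y, C_emp))
```

The structural ambiguity the authors describe is directly visible here: any exchange of mass between $\Lambda$ and $L\Gamma L^{\top}$ that leaves $\Sigma_y$ unchanged also leaves this cost unchanged.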
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for insightful and constructive comments. We also really appreciate the reviewer for taking the time to carefully read the paper. We would like to stress the fact that our proposed framework applies to more advanced models like matrix-normal (MN), factor analysis (FA), and Gaussian-process (GP) models. All of these models can be decomposed into two components - learning of the source dimension and learning of the noise. Typically in all of these models, the noise is assumed to either be a scalar, or heteroscedastic or have some known structure. To our knowledge, there is no work that describes joint learning of the source dimensions with learning of full-structure noise as outlined in the current paper. We thank the reviewer for bringing to our attention these algorithms as applied to fMRI data, and we believe that our algorithm for full-structure noise learning could be incorporated to improve these algorithmic frameworks for noise robustness.\\n\\nAnother important contribution of the proposed method in contrast to existing approaches is that we propose an efficient optimization strategy with closed-form updates in each step, which is accompanied by convergence guarantees. The majority of algorithms in the literature, including the papers suggested by the reviewer, rely on the expectation-minimization (EM) framework, which is quite slow in practice and has convergence guarantees only under certain strong conditions. In contrast, our approach uses the majorization-minimization (MM) framework and by constructing tighter convex bounds on the original non-convex negative log-likelihood cost function, the proposed algorithm benefits from faster and guaranteed convergence compared to EM.\", \"here_are_more_specific_details_to_questions\": \"1. Even in the isolated case of estimating the spatial and temporal noise covariance separately, the papers currently available in the fMRI literature assume AR(1) structure for modeling the temporal noise covariance and a diagonal structure for modeling the spatial noise covariance in order to make the implementation tractable, e.g., [[Chapter 3.1, Shvartsman et. al, AISTAT 2018](https://www.sciencedirect.com/science/article/abs/pii/S1053811905002491)]: \\u201cIn practice, we restrict the form of both the spatial and temporal residuals to be diagonal or autoregressive, since estimating unconstrained $\\\\Sigma_v$ and $\\\\Sigma_t$ is still intractable at fMRI scale.\\u201d Please also see the following Github repositories links, [link 1 for Bayesian RSA example](https://github.com/brainiak/brainiak/blob/master/examples/reprsimil/bayesian_rsa_example.ipynb) and [link 2 for MN-RSA](https://github.com/brainiak/brainiak/blob/master/examples/matnormal/MN-RSA.ipynb), regarding the implementation details that have been considered for these methods [[Chapter 5.2, Shvartsman et. al, AISTAT 2018](https://www.sciencedirect.com/science/article/abs/pii/S1053811905002491)], [[Ming Bo Cai, NeuriPS 2016]](https://proceedings.neurips.cc/paper/2016/hash/b06f50d1f89bd8b2a0fb771c1a69c2b0-Abstract.html). We can easily show that our full-structure noise updates can be incorporated within this framework. Furthermore, some papers in the EEG/MEG literature have shown that using a combination of different spatiotemporal structured noise to model richer structures does not significantly improve reconstruction (source dimension estimation) with great increases in computational complexity [[Bijma et. 
al., NeuroImage 2002](https://www.sciencedirect.com/science/article/abs/pii/S1053811903002155)], [[Bijma et al., NeuroImage 2005](https://www.sciencedirect.com/science/article/abs/pii/S1053811905002491)]; and therefore, the source localization accuracy is sufficiently enhanced by taking into account the spatial correlations only. \\n\\n2. We would like to point out that it is possible to extend our proposed method to low-rank constraints on the noise covariance. This is because the basic update rule for the noise learning does not significantly change, and only extra updating rules need to be added to the estimation procedure. We are currently working on including such constraints in our model. To elaborate, the low-rank assumption can be incorporated by noting that the noise covariance can be factorized in Cholesky style, e.g., let $\\\\Lambda = AA^{\\\\top}$, where $A$ is a low-rank matrix. Therefore, it is straightforward to embed the assumption of low-rank noise into the full-structural noise updating rule by replacing $\\\\Lambda$ with $AA^{\\\\top}$, and then estimating the matrix $A$ instead of $\\\\Lambda$. We are specifically exploring incorporating low-rank assumptions within a Riemannian framework so that we can exploit the full features of this approach.\\n\\n3. Given the theoretical nature of the conference, we did not include real data analyses; however, we will add real data analyses to the paper within the coming days in order to illustrate the efficacy of our approach.\"}",
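A minimal sketch of the low-rank substitution described in point 2 above — replacing $\Lambda$ by $AA^{\top}$ and estimating the factor $A$ instead of a full $\Lambda$. The jitter term is our own addition for numerical invertibility, not part of the authors' derivation:

```python
import numpy as np

def lowrank_noise_covariance(A, jitter=1e-6):
    """Lambda = A A^T with A of shape (n_sensors, r), r << n_sensors."""
    n_sensors = A.shape[0]
    return A @ A.T + jitter * np.eye(n_sensors)

# Hypothetical usage with the cost sketched earlier: estimate A, not Lambda.
# Lambda = lowrank_noise_covariance(A)
# cost = type2_negative_log_likelihood(Y, L, Gamma, Lambda)
```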
"{\"title\": \"an effective optimization method for full noise covariance estimation but the novelty is not strong enough\", \"review\": \"The paper proposes an efficient optimization method for estimating the full noise covariance in a hierarchical Bayesian framework. It's shown in the experiment that the optimization method could recover the true noise covariance in a simulated example and estimating the full covariance has better performance than homo- and heteroscedastic covariance.\\n\\nI think the proposed method is an effective tool to estimate the full noise covariance especially for the problem setting in this paper. But the overall novelty and contribution are not strong enough for the ICLR community.\\n\\nPapers in fMRI literature [Michael Shvartsman et al 2017, Anqi Wu et al 2019] have proposed to work with full noise covariance in more complicated models such as factor analysis, Gaussian process regression. The basic model in this paper is a bit too simple compared with other models preventing from making significant methodological contributions. It might fit a signal processing or brain source imaging specialized publication better.\\n\\nAlso in many applications (especially with brain data), it's shown that a full rank noise covariance is not always preferable given that there are usually some correlations among measurements that lead to lower dimensional subspace at the noise level. So I'm not quite sure whether a full covariance without any structural or subspace assumption would really outperform low-rank full covariance when applying to the real data.\\n\\nAnother issue in this paper is there is no real data application. I'm not very convinced that simulated data generated from a realistic lead field matrix is considered as the real-world data.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Well known setting in the literature, limited experimental validation.\", \"review\": \"The authors propose a methodology for type-II maximum likelihood on a hierarchical Bayesian model for EEG signals. The particular feature of the model, which separates it from other EEG models, is the consideration of a full covariance matrix which makes the noise correlated and heteroskedastic.\\n\\nThe model, as claimed by the authors, is fully Gaussian and therefore tractable. As a consequence, the inference poses no challenges other than the computational complexity. To address this, the authors propose a mechanism for, what they claim is, efficient optimisation. This contribution alone is not sufficient (over the standard literature) for publication as a theoretical improvement. \\n\\nGiven the lack of a theoretical advancement, I was hoping that the contribution of the article came in the experimental treatment, however, it was not the case. A single set of experiments using synthetic data was considered, where the proposed method was compared against other benchmark. It is far form surprising when the authors deal with exact inference on a model where the observations where produced under the same statistical assumptions.\\n\\nI also would like to emphasise that the discussion of the paper states that \\\"This paper proposes an efficient optimization algorithm for jointly estimating....\\\" and \\\"The benefits of our proposed framework were evaluated within an extensive set of experiments \\\". None of these claims are true or at least they not validated by any supporting evidence in the paper. \\n\\nPerhaps with the stated future work and stronger experimental results (real data), this paper can be improved.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Improved treatment of correlated noise for linear regression, but somewhat odd choice of publication forum\", \"review\": \"Summary:\\nThe paper generalised Type-II ML regression models for scenarios where different noise dimensions cannot be assumed independent, but instead one needs to model the full covariance structure. This is clearly an important problem and it is well motivated in the work.\", \"reasons_for_score\": \"I recommend rejecting the paper even though it represents high-quality work in statistics, because I think it is somewhat tangentially related to ICLR and the contribution would be better appreciated in a different venue.\", \"strong_points\": \"(1) Addresses an important problem. (2) Seems to work well in practice\", \"weaknesses\": \"(1) Limited conceptual novelty. (2)Technical contribution hidden in Appendix\", \"detailed_review\": \"The work addresses a relevant statistical question of accounting for correlated noise in hierarchical linear regression, but feels somewhat of a poor fit for ICLR. It formally fits within the scope, but still feels out of place in the sense that neither readers interested in the theoretical contributions nor people looking to apply these methods would consider ICLR as a natural venue to look for the information. The development is restricted to a specific, relatively simple, model family that is frequently used in several fields but that is not at the core of the ICLR community. This is highlighted also by the fact that the technical contribution is largely in statistical properties of the covariance estimator, and for this audience gets hidden in the Appendix. Consequently, I believe that paper would much more naturally fit into a publication forum in statistics.\\n\\nThe proposed approach itself is sound and well developed. Accounting for correlated noise is a very obvious thing to do, but the technical details are non-trivial. The authors rely on Riemannian optimisation for covariance matrices and are able to use the recent Champagne algorithm for SBL. The detailed derivation of Theorem 2 shows non-trivial technical contribution, but remains somewhat isolated as it is hidden in the Appendix. For example, there is no discussion on whether the result derived here would have uses also in other model families. I can see several potential uses for better tools for learning full covariance noise e.g. in matrix factorisation models (e.g. probabilistic CCA relies on covariance estimates) or non-linear regression models, but the authors do not discuss this at all. A proper discussion on this would be important to link the work more closely to the broader activities in the field, to extend the contribution beyond the current viewpoint of a very specific model.\\n\\nThe empirical experiments are well carried out and demonstrate the value of learning the full covariance matrix compared to methods that only operate with diagonal noise. This is sufficient, since no clear comparison methods accounting for full covariance are available.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Well written and interesting. Experimental evaluation could be improved. Novelty limited.\", \"review\": \"Joint Learning of Full-structure Noise in Hierarchical Bayesian Regression Models\", \"summary\": \"The paper argues that modeling the full covariance structure in a sparse bayessian learning setting leads to significantly better results in eeg inverse problems. The paper details a majorization-minimization type algorithm leading to a set of fairly simple update rules. The proposed method is evaluated on simulated data.\", \"positive\": \"1. The proposed method is well motivated and the problem is highly relevant.\\n2. The mathematical details regarding the algorithm are presented in sufficient detail.\\n3. The paper is well written and easy to follow for the most part.\\n4. Experiments are reasonable and presented clearly.\", \"negative\": \"1. The abstract could be improved to more clearly describe the problem and contributions of the paper in a self-contained manner.\\n2. Has this particular problem (sparse bayesian regression with full covariance noise) not been considered by others? The main contribution, in my view, is algorithmic; which other algorithms have been used previously for this type of problem? (I think both ML-II and MCMC and possibly other methods have previously been used.) I would have liked a review and experimental comparison.\\n3. While the experimental evaluation is reasonable, I think the paper would benefit from a demonstration and benchmarking with competing approaches on a real data task.\\n4. Experiments on simulated data highlighting more clearly the *algorithmic* advantages of the proposed method would be appreciated.\\n5. I did not notice a link to software implementing the proposed method? Sharing software implementations will significantly strengthen the contribution and allow the community to reproduce the results.\", \"recommendation\": \"Weak reject.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
01olnfLIbD | Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching | [
"Jonas Geiping",
"Liam H Fowl",
"W. Ronny Huang",
"Wojciech Czaja",
"Gavin Taylor",
"Michael Moeller",
"Tom Goldstein"
] | Data Poisoning attacks modify training data to maliciously control a model trained on such data.
In this work, we focus on targeted poisoning attacks which cause a reclassification of an unmodified test image and as such breach model integrity. We consider a
particularly malicious poisoning attack that is both ``from scratch" and ``clean label", meaning we analyze an attack that successfully works against new, randomly initialized models, and is nearly imperceptible to humans, all while perturbing only a small fraction of the training data.
Previous poisoning attacks against deep neural networks in this setting have been limited in scope and success, working only in simplified settings or being prohibitively expensive for large datasets.
The central mechanism of the new attack is matching the gradient direction of malicious examples. We analyze why this works, supplement it with practical considerations, and show its threat to real-world practitioners, finding that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
Finally, we demonstrate the limitations of existing defensive strategies against such an attack, concluding that data poisoning is a credible threat, even for large-scale deep learning systems. | [
"Data Poisoning",
"ImageNet",
"Large-scale",
"Gradient Alignment",
"Security",
"Backdoor Attacks",
"from-scratch",
"clean-label"
] | Accept (Poster) | https://openreview.net/pdf?id=01olnfLIbD | https://openreview.net/forum?id=01olnfLIbD | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"3lGdK3neDyH",
"JPxZUZn7-pK",
"5u-8sh4dQh3",
"nHg4LNDJata",
"Jyx88gL6pci",
"qaG09SkNDSJ",
"A389m8x3HYC",
"JcUtpsosZzr",
"yu44fjLZro",
"SfrDchiFuvp",
"cN9U40kfsM",
"vfl_sgowWkJ",
"8S8UNSdyvc3",
"vhPesayC7X9",
"38h-AaEjjBd",
"Zy418x0vOS"
],
"note_type": [
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1614593671480,
1610040512746,
1606303175947,
1606302121958,
1606296254510,
1605642901958,
1605302512686,
1605302480538,
1605301921994,
1605301104611,
1605299541030,
1605297552081,
1604597206378,
1603999085624,
1603935944735,
1602966188336
],
"note_signatures": [
[
"~Jonas_Geiping1"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3659/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Version uploaded\", \"comment\": \"Dear Program Chairs,\\n\\nwe have uploaded a final version of this work, including additional experiments and discussion regarding multiple targets (See appendix F.8), a table showing clean validation accuracy (appendix F.9), and textual improvements to introduction and related work in direct response to many helpful suggestions from the reviewers - especially focusing on providing an improved understanding of the discussed attack within a wider taxonomy of data poisoning attacks.\", \"the_implementation_provided_as_supplementary_material_is_maintained_at_https\": \"//github.com/JonasGeiping/poisoning-gradient-matching.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper presents a scalable data poisoning algorithm for targeted attacks, using the idea of designing poisoning patterns which \\\"align\\\" the gradients of the real objective and the adversarial objective. This intuition is supported by theoretical results, and the paper presents convincing experimental results about the effectiveness of the model.\\n\\nThe reviewers overall liked the paper. However, they requested a number of clarifications and some additional work, which should be incorporated in the final version (however, the authors are not required to use the wording as poison integrity/ poison availability). In particular, it would be great to see the experiment the authors suggested in their response to Reviewer 2 about the effectiveness of their method for multiple targets (this is important to better understand the limitations of the proposed approach).\"}",
"{\"title\": \"Final remarks\", \"comment\": [\"Thanks for the prompt reply. Again, I think we substantially reached a good level of agreement.\", \"I know that both integrity and availability poisoning attacks can be casted as a bilevel optimization. It only changes which points you select for the outer loss. However, this was not the main point of my review.\", \"I only think that the authors should work towards improving clarity. As they also recognize, the literature is quite confusing, and we have an opportunity for clarification. I suggest the authors to:\", \"Clarify and narrow the scope of their attack immediately from the title and abstract. State that you are considering clean-label targeted/integrity attacks from the very beginning; and remove all the claims of large computational complexity which would better refer to availability attacks (e.g., in the abstract: \\\"Previous poisoning attacks ... being prohibitively expensive for large datasets\\\" - e.g., this is not the case for metapoisoning or poison frogs, which are your direct competitors).\", \"Clarify your threat model in context. I think that the definition in 3.1 is fairly clear, even though some choices should be better motivated. In which practical cases the gray-box assumption that the architecture is known to the attacker makes sense? This should be motivated, and overall, it should be clarified how this attack makes different assumptions from other clean-label or backdoor poisoning attacks, if any, and why.\", \"State that this attack may be also used in the context of poisoning availability attacks in future work (I fully agree with the last point of your answer).\", \"Overall, I really like to thank you for the fruitful discussions, and hope that my feedback can be useful to improve your work.\"]}",
"{\"title\": \"Answer to Final Comments - Both availability and integrity attacks are bilevel optimization problems\", \"comment\": \"First off, we're very grateful for both your perspective and the substantial interest in clarifying this work - really.\\nWe will use this discussion to clarify our writing and improve our presentation of the taxonomy of poisoning attacks, and better differentiate the subfield of clean-label targeted poisoning attacks from other attack scenarios.\\n\\n\\n**Some additional comments regarding points raised in the final comments:**\\n* While we like the distinction between backdoor attacks and clean-label attacks proposed in this review, the general literature is sadly much less precise. Other works (and also we in the current version) use \\\"backdoor\\\" as synonym of \\\"poison integrity\\\". Under this definition our attack is also a backdoor attack. Our related work section differentiates backdoor - \\\"poisoning attacks\\\" like ours from \\\"backdoor trigger attacks\\\" (such as Saha2019) based on the criterion that \\\"trigger\\\" attacks are allowed to modify both training and testing images [but not labels], whereas \\\"poisoning attacks\\\" may only modify images from the training set [but not labels]. \\n\\nAn orthogonal direction to this would be whether the attacker is allowed to modify not only the training set, but also provide a pretrained, but backdoored model, which implies partial control over the (pre)-training phase or control over access to training data. So clean-label attacks are backdoor attacks, depending on the definition.\\nOur threat model in Sec. 3.1. defines the clean-label targeted-poisoning setting that we consider unambigously.\\n\\n* The defender in the \\\"clean-label\\\" setting (as defined in your second paragraph) is indeed allowed control over training data, but the crucial challenge for the defender is that there is no additional \\\"clean training data\\\" available. This makes defenses difficult as the defender has to assume that any data used for comparison has also been poisoned. As such, many defenses that rely on classification of poisoned and non-poisoned data fail because they have no basis of comparison. Only defenses based on unsupervised anomaly detection can still work in this setting - however many of these defenses are based on anomaly detection in feature space. Yet, we show that the feature space is non-anomalous after our attack (in Fig. 4a).\\n\\n* Important: Both \\\"poison availability\\\" and \\\"poison integrity\\\"/\\\"targeted poisoning\\\" are formally bilevel optimization problems. In both cases, the lower-level problem is the training of a model parametrized by theta with respect to some data x, whereas the higher-level problem is either to maximize the loss over held-out data in the case of poison availability, or to minimize the loss of some specific held-out target image in the case of targeted poisoning. For both, the higher level problem depends on model parameters theta which themselves depend on x, so that the overall objective can be optimized w.r.t to x.\\nHowever in practice this objective is infeasible to solve and all attack schemes have consider heuristic or approximative solutions in some shape or form.\\n\\n* Poison Frogs is also a clean-label targeted attack, the only difference between our threat model and the threat model in Shafahi2018 is that there the feature extractor is kept fixed (and known to be fixed by both the attacker and the defender). 
MetaPoison is also a clean-label targeted attack, and it considers the same threat model that we consider (where the feature extractor is allowed to change and is unknown to the attacker).\\n\\n* We will work on clarifying our threat model textually. As far as we understand currently though, the definition in 3.1 is unambiguous - if this is not the case, we would be very grateful for information on where ambiguity remains.\\n\\n* We provide a complexity/effectiveness plot in Fig. 10 which also includes Poison Frogs. Poison Frogs has a similar complexity to our attack - however the attack fails in the from-scratch scenario, possibly because it does not approximate the bilevel objective well enough. In contrast, MetaPoison manages to approximate the objective better and leads to stronger attacks, but this comes at a significant computational cost. Our attack is the first to succeed in the difficult scenario of from-scratch training with a method that is roughly as complex as Poison Frogs, but even stronger than MetaPoison. We will add additional clarification regarding this. \\n\\n* Note that we never compare to poison availability attacks in this work and do not run experiments in this setting. It is conceivable that the proposed gradient matching is also able to work in the poison availability setting (this would require the replacement of the currently considered target gradients with a negative gradient sample from the validation set), but we think this is a significantly different scenario that is better suited for future work.\\n\\n\\nAgain, we're glad to have this discussion and will revise and clarify our work accordingly.\"}",
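For readers following this exchange, a minimal formalization of the two bilevel problems compared in the comment above. The notation here ($\mathcal{C}$ for the admissible perturbation set, $x^{t}$ for the target image, $y^{\mathrm{adv}}$ for the adversarial label) is our own assumption rather than taken verbatim from the paper:

```latex
% Inner problem (shared by both attack types): ordinary training on poisoned data x
\theta^{\ast}(x) \in \arg\min_{\theta} \; \mathcal{L}_{\mathrm{train}}(x, \theta)

% Targeted (integrity) poisoning: misclassify one specific, unmodified target x^{t}
\min_{x \in \mathcal{C}} \; \mathcal{L}\bigl(f(x^{t}; \theta^{\ast}(x)),\, y^{\mathrm{adv}}\bigr)

% Availability poisoning: degrade performance on held-out validation data
\max_{x \in \mathcal{C}} \; \mathcal{L}_{\mathrm{val}}\bigl(\theta^{\ast}(x)\bigr)
```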
"{\"title\": \"Response to authors' rebuttal\", \"comment\": \"Please refer to the updated review. Comments can be found at the end.\"}",
"{\"title\": \"My review comments are well addresed\", \"comment\": \"I thank the authors for providing additional numerical results and clarifying my questions. I have no further comments. Neat work!\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for your support and sharing our enthusiasm about this work. We will gladly fix the typo in proof 1.\"}",
"{\"title\": \"Answer to Reviewer 2\", \"comment\": \"**\\u201cAlthough the attack model still requires knowing the network architecture (gray-box setting), the resulting poisoned datasets are more effective against different initializations, and some techniques (e.g. model ensemble, multiple restarts) are proposed to further boost the attack performance.\\u201d**\\nWhile the attack is most successful when the attacker has knowledge of the victim\\u2019s architecture, we stress that this is not a requirement for a successful attack, as demonstrated in Table 3, with our Google Cloud AutoML results and Fig. 13. In Table 3 and Fig.13 we show that attacks directly transfer to other architectures. We also show that an ensemble of several architectures can attack any of the ensembled architectures and as such the attacker can ensemble common architectures. Lastly, for the google Cloud experiments the architecture is entirely unknown.\\n\\n**\\u201cOne limitation from Appendix A.8 is that the proposal may not scale well to more than 1 target image, as indicated by the rapidly decreasing attack accuracy. It will be more meaningful to control the effective budget/target and check the resulting accuracy of different number of targets, in order to understand whether gradient matching is scalable to multiple-target setting.\\u201d** \\nWhile you are correct that the attack success decreases (in percentage) as we increase the number of targets, this is for a fixed budget. It would be interesting going forward to test the effect of scaling the budget with the number of targets.\"}",
"{\"title\": \"Answer to Reviewer 1\", \"comment\": \"**1. Attack assumes access to exact training data**\\n\\nThe assumption of knowledge of full training set is a white-box scenario in which the attack is most dangerous. However, we do conduct experiments where the poisons are created on a different dataset than the victim is trained on. We show these in appendix Table 7, where we find that poisons are still effective even if only a subset of CIFAR-10 is known to the attacker. Also note that the proposed attack only needs access to a pretrained model trained on a dataset similar to the victim dataset and access to only the subset of data that is supposed to be poisoned - only these images are included in the gradient matching objective. Attacks where all data is modified are possible within our framework, this would correspond to a budget of 100%, and likely consider a smaller perturbation - but we did not focus on these attacks, because the attacker might only be able to modify a small subset of data. \\n\\n Attacks where all the data is modified are certainly possible, but we focus on the more strict threat model wherein an attacker might have knowledge about what training data a victim will use, but only be able to modify a small portion of this data.\\n\\n\\n**2. From scratch**\\nThe question is on what data would the victim pretrain? If the dataset is poisoned then even this pretraining would happen using poisoned data. Are you referring to a transfer learning setting like the one discussed in Shafahi et al? \\nOr is the question whether the gradients should be aligned based on an early epoch? We include a comparison in Fig 12b, where we analyze the strength of the attack when using a pretrained model that is trained for fewer epochs - we find that using pretrained models from later epochs leads to more successful attacks. Note that although the gradient signal is small in magnitude, the magnitude is cancelled in the cosine similarity. We also analyzed the effects of the poisons at different stages of training, see Fig. 9.. We find that in general, the victim begins to misclassify the target image in the last 20 epochs, leading us to believe the poisoned gradient starts to take hold later in training. \\n\\n**3. Test accuracies**\\nPlease refer to our general comment for information about clean test accuracies.\\n\\n**4. \\\"single differentiation\\\"**\\nWe will clarify this statement - the statement is in the context of other bilevel methods, which need several evaluations of \\u201cthe\\u201d gradient $ \\\\nabla_x \\\\nabla_\\\\theta \\\\mathcal{L}(\\\\cdot)$ to take a single step - but indeed two backpropagations are necessary to compute an update to the poisoned data.\\n\\n**5. Strong Focus on Metapoison:** \\nWe focus on MetaPoison because to our knowledge, this is the only other method that performs targeted, clean-label poisoning from scratch. We do agree though that the methods are quite different in their motivation/approaches. \\n\\n**6. Writing**\\nThank you for the comments. We made some figures in-line/with double subfigures which may have made the legend hard to read without zooming in. We will expand these figures for the final version, and can post an enlarged version to the appendix in the meantime. \\nAs for requiring access to the test set, there is a subtle, but important difference in the attacker assumptions. 
For poisoning attacks like ours, we only require the attacker to have picked out a specific target instance that they wish to have the victim misclassify. We do not assume the attacker can modify this target image, as is the case with backdoor attacks. One could imagine this distinction being important if the attacker wishes to poison a facial recognition system where the target is an unsuspecting third party; the attacker will not be able to add perturbations to the images of this party\\u2019s face. Also, if the target is entirely unmodified, then there is no chance for a defender to sanitize the target image at test-time, as is possible for backdoor attacks (see e.g. \\u201cWang et al. Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks\\u201d - a test-time defense against trigger patches).\"}",
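As referenced in point 4 of the comment above, a minimal PyTorch sketch of a gradient-matching objective as we read it from the paper (a normalized alignment of poison and adversarial target gradients, cf. Eq. 3). The two `autograd.grad` calls are the two backpropagations mentioned; names and batching details are illustrative assumptions, not the released implementation:

```python
import torch

def gradient_matching_loss(model, criterion, poison_imgs, poison_labels,
                           target_img, adv_label):
    """1 - cosine similarity between poison and adversarial target gradients."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Backprop 1: adversarial gradient for the (unmodified) target image.
    target_grad = torch.autograd.grad(
        criterion(model(target_img), adv_label), params)

    # Backprop 2: gradient induced by the poisons; keep the graph so the loss
    # can later be differentiated w.r.t. the poison pixels themselves.
    poison_grad = torch.autograd.grad(
        criterion(model(poison_imgs), poison_labels), params, create_graph=True)

    dot = sum((tg * pg).sum() for tg, pg in zip(target_grad, poison_grad))
    norms = (sum(tg.pow(2).sum() for tg in target_grad).sqrt() *
             sum(pg.pow(2).sum() for pg in poison_grad).sqrt())
    return 1 - dot / norms
```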
"{\"title\": \"Answer to Reviewer 4 - Minor Comments\", \"comment\": \"* \\u201cThe Poison Frogs attack is described in Section 2, marking as a drawback the fact that it only works with fine-tuning. It is not clear however why this is a limitation, as one could train the model with normal training and add the poisoned data in the last epochs.\\u201d\\nThe attacker does not control how the victim trains their model. Poison Frogs works in the setting wherein the victim is using a transfer learning/fine-tuning training strategy, but not in the setting wherein a victim trains a randomly initialized model from scratch. Your proposed strategy is therefore not possible, firstly because it requires the attacker to gain knowledge of the new feature representation of the victim, and secondly because it requires the attacker to insert poisons in the middle of training. The attacker has no control over when poisons are inserted, they can only provide a modified dataset, as outlined in the threat model.\\n* \\u201cIn Algorithm 1, step 9: what is the update being performed? It seems to me that the pseudo-code does not capture the entire processing steps, hence making the whole work hard to reproduce.\\u201d\\nThe update being performed is one step of the first-order descent algorithm Adam, using the sign of the gradient, on the perturbation to the poisoned images where the objective is defined in eq. 3. We also include publicly available code in our submission to help reproduction of results. \\n* \\u201cFigure 2 shows gradient alignment along epochs (please report the axis labels), however it does not seem \\\"flat\\\" in the end, it is slightly decreasing. What happens if we increase the number of epochs? Will the alignment disappear?\\u201d\\nThe alignment slowly decreases, but stays positive when increasing the number of epochs. This is a consequence of training. It is actually the bound (right-hand side of Eq.(4)). that remains stable over additional epochs, see Fig. 5. \\n* \\u201cEq. 1 shows no constraints on the data points staying in the feature space after perturbations. Is it considered during the experiments?\\u201d\\nWhile this could be a heuristic of the attacker, this is not a part of the targeted poisoning objective we consider. Note however that the feature representations of the poisons after training are not anomalous for their given class.\"}",
"{\"title\": \"Answer to Reviewer 4\", \"comment\": \"**Different notions of data poisoning (Paragraph 3: \\u201cFirst, \\u2026 caused by the attack\\u201d):** Thank you for your suggestions on nomenclature. We indeed refer to the definitions in Barreno et al, but follow the nomenclature used in recent works of Shafahi et al and Huang et al. We will make a point to clarify the notation and mention the poison integrity/ poison availability nomenclature..\\n\\n**Complexity in the setting of from-scratch victim training (Paragraph 4: \\u201cWhile this work \\u2026 not mistaken\\u201d):** We will amend the sentence on the complexity of targeted attacks to specify the from-scratch setting. This is crucial because Koh et al. only consider a frozen feature extractor, simplifying the optimization problem significantly, compared to the \\u201cfrom-scratch\\u201d setting. Also, the DogFish dataset is simply 900 images of dogs and 900 of fish - this is not comparable to the >1,000,000 images in full ImageNet.\\n\\n**\\u201cFrom-scratch\\u201d setting (Paragraph 5: \\u201cAnother important \\u2026 fair enough\\u201d):**\\nThe attacker does not control the training routine of the victim. While the attacker can certainly use a pretrained network to craft poisons (as we do in our method), a victim will train a new network from a new random initialization - a setting where previous transfer based attacks fail - because they crucially rely on the feature extractor being fixed. These attacks break when the feature representation changes! We confirm the suspicion that these transfer based attacks do not succeed in the from-scratch setting in Table 2, where we replicate several previous methods, but train a victim from a new random initialization using the poisoned dataset.\\n\\n\\n**\\u201cClean-label\\u201d description of attacks (Paragraph 6: \\u201cThere are parts \\u2026 trigger on the image\\u201d):**\\n* Clean-label attacks are more insidious, and realistic than label flipping attacks since clean-label attacks do not assume the attacker is also the labeler of the victim\\u2019s data. Many industrial practitioners will simply collect unlabeled data from the internet, and label it themselves (or employ services like Amazon Turk). Therefore an attacker cannot rely upon being able to incorrectly label any specific image. Moreover, targeted attacks become trivial in the label-flipping regime since the attacker could simply introduce the mislabeled target image into the victim\\u2019s training set. \\n* Our attacks are still \\u201cclean-label\\u201d in the sense that these images are possibly noisy, but still undeniably images of e.g. dogs. Furthermore, the perturbations may be noticeable to a reader of the paper because we include the clean base images above, but an unwitting practitioner might not think twice about the poisoned images - the images pass a cursory glance from a human worker. Finally, we show our attack is successful for lower epsilon values, which are imperceptible perturbations. These can be found in fig. 3. \\n* Designing a detector against adversarial attacks is actually surprisingly difficult, see (Carlini & Wagner, 2017 - cf. References section). That work discusses evasion attacks, however the same considerations hold in the targeted poisoning setting. 
We also test our attack against defenses that are meant to detect anomalies (see Fig 4).\\n* We\\u2019d also like to point out that the considered threat model (small epsilon, small budget, clean-label attacks) has been an active field of past research on poisoning attacks against deep neural networks. \\n\\n\\n**Defensive strategies (Paragraph 6:)**\\nThank you for pointing this out - yes, we only considered defenses that are known to work against targeted attacks and we will clarify this in the revised version of our paper. For instance, the defense in Peri et al. successfully removes 100% of the poisons generated by poison frogs and convex polytopes. cf. Peri et al., and Hong et al. show that differential privacy reduces the effectiveness of Poison Frogs by 38.36%. Taking defenses that are specifically developed for targeted attacks on deep networks we compare to seems the most expressive numerical evaluation for our work. Furthermore, all but three of the defenses in Table 1 of https://arxiv.org/abs/1910.03137 require access to clean training data, an assumption that does not apply in our setting. These three remaining defenses (Tran et al., and Chen et al., Chen et al.) all rely upon the heuristic that poisons will be anomalous in feature space - an assumption we show does not apply for our attack. Furthermore, many of these defenses are in the setting of backdoor attacks, where a fixed, easily spotted patch is added to all poisons, not individually crafted perturbations as in our attack. \\n\\n**Hyperparameters and clean validation accuracy (Paragraph 7: \\u201cFinally, the experimental \\u2026 of the model\\u201d)**\\nNote that these parameters refer to variables introduced in Alg. 1. We will clarify this and backreference Alg. 1. Please see the above general comment regarding the definition of \\u201cpoison success\\u201d and the natural validation accuracy of the poisoned models.\"}",
"{\"title\": \"General Comments\", \"comment\": [\"We thank the reviewers for their constructive feedback. We will respond to specific points raised under their respective reviews. Here, we will respond to common concerns:\", \"1) On the readability of figures, specifically Fig. 4: in all figures, we will make it more clear in the main body what \\u201cpoison success\\u201d, or equivalently \\u201cpoison accuracy\\u201d means. For convenience, this value refers to the percentage of runs (averaged over randomly initialized networks) in which the target image is mis-classified as the poison class, see the appendix for more details - we will make this clearer in the main text. As for the size of the text in Fig. 4, if one zooms in on a computer screen, the text becomes readable. However, we recognize this is an inconvenience, and not possible on a printed version, so we will make each subfigure its own figure, and expand the legend size.\", \"2) The question of validation accuracy of poisoned datasets on clean images was also a common concern. However validation accuracy is unaffected. Due to the considered threat model (small epsilon, small budget of 1%), the attack, as alluded to in the introduction, does not noticeably degrade the clean validation accuracy.\", \"To emphasize this with actual data, we have included below the validation accuracy for the baseline experiments in the inset figure (subsection 5.1). These are the values for validation accuracy on CIFAR-10, for the poisoned dataset and the clean dataset:\", \"K=1,R=1: poisoned: 92.12, clean: 92.25\", \"K=2,R=1: poisoned: 92.06, clean: 92.16\", \"K=4,R=1: poisoned: 92.08, clean: 92.18\", \"K=8,R=1: poisoned: 92.20, clean: 92.16\", \"K=1,R=8: poisoned: 92.08, clean: 92.22\", \"K=2,R=8: poisoned: 92.03, clean: 92.27\", \"K=8,R=8: poisoned: 92.04, clean: 92.13\", \"All values are averages over their respective runs. We will include a table with these natural accuracies in the updated appendix.\"]}",
"{\"title\": \"Review for Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching\", \"review\": \"Summary: This paper introduces a novel targeted clean-label poisoning attack, expected to be more efficient and scalable than current ones. The attack is formulated as a bilevel problem which is then solved with a (fast) heuristic approach based on aligning the gradients of the inner and outer objective functions. A theoretical analysis is also reported to show that this strategy consistently finds a descent direction for the outer objective, asymptotically converging to (a local) minimum.\\n\\nFirst of all, I like how the authors have derived their attack and its heuristic solution, and I'm wondering if this generalizes to other applications where bilevel problems are at the core, like meta learning or hyperparameter optimization. However, I have several concerns on the presentation and soundness of the reported results.\\n\\nFirst, I think that this paper makes confusion (at least in the reader's mind) when introducing the data poisoning problem. At the beginning, there is no clear distinction between the two main goals of data poisoning:\\n(1) poisoning availability attacks, which aim to increase the test error causing a denial of service, and\\n(2) poisoning integrity attacks (which are often referred to -in a misleading manner- as targeted attacks), which aim to allow specific intrusions/attacks at test time (backdoor attacks belong to this category). \\nFor a clearer nomenclature/definition see: M. Barreno, B. Nelson, A. Joseph, and J. Tygar. The security of machine learning. Machine Learning, 81:121\\u2013148, 2010. \\nThere, indeed, targeted/untargeted is referred to the victim user, not to the goal/security violation caused by the attack.\\n\\nWhile this work seems to claim that, in general, poisoning attacks are computationally demanding, a distinction should be made. While poisoning availability attacks are typically much more computationally demanding (they do require solving the bilevel optimization problem to work well) and this heavily hinders their scalability to large datasets, poisoning integrity attacks can be quite efficient (there's no need to solve a bilevel problem for them to work well, as in the case of most of the considered competing attacks in this paper, or anyway the approach can be simplified - see, e.g., Koh et al., ICML 2017, where the whole network was frozen and the bilevel problem was solved only by assuming that the parameters of the last layer were updated).\\nI believe that this aspect should be clarified from the beginning. First, this paper focuses on *targeted* (or integrity) poisoning attacks, and this should also be clear from the title. Second, sentences related to the overwhelming complexity of targeted/integrity poisoning attacks should be revised (e.g., Koh et al., ICML 2017 also worked on the DogFish data which should be a subset of ImageNet, if I'm not mistaken). \\n\\n\\nAnother important issue which I do not completely understand is what the authors mean with the word \\\"from scratch\\\", from the viewpoint of previous attacks. I agree with them that previous attacks designed to work on pre-trained models with fine tuning may not work against models which are trained from scratch, but what prevents the attacker to train a model from scratch on the clean data and then design their attack samples with fine tuning? 
The attack samples can then be added to the initial training set to see if, when learning again from scratch on the poisoned data, the attack remains effective or not. Is this the setup that the authors have considered in their paper for such attacks, or they run them against an \\\"untrained\\\" (or not fully trained) model? If we are in the second scenario, I don't think the analysis reported should be considered fair enough.\\n\\n\\nThere are parts in the paper where it is claimed that 'clean-label' attacks are in some sense better than label flips or attacks that do not preserve image semantics. Why? Are we expecting human labelers to check the quality of our data?\\nOr are we expecting that clean-label attacks are harder to spot?\\nBoth questions are unaddressed in this paper.\\nFirst, I don't think that in many realistic scenarios humans are expected to cleanup the whole training set, especially when it contains a lot of samples. Second, it's also true that the level of noise used in this paper is not so small. By zooming in Figs 6-8, the perturbation becomes quite visible even to the human eye.\\nHence, we cannot only instruct humans to detect these patterns, but we can probably train detectors to do that automatically. Accordingly, I don't see in which practical, relevant application scenarios \\\"clean-label\\\" can be retained useful as a requirement.\\nFinally, even though the authors have analyzed the robustness of their attack against some defense mechanisms, the defenses considered aim to detect mostly poisoning availability attacks and NOT backdoor attacks or targeted/integrity poisoning.\\nI am even skeptical that such methods can detect label flips or even other current attacks. Have the authors tested such defenses against the competing approaches (poisoning frogs, convex polytopes, etc.)? Do these attacks work or not against them?\\nHow do detection methods for backdoor attacks work against the proposed attack? For a list of such detection methods, see, e.g., Table 1 in https://arxiv.org/abs/1910.03137 (note that some detection methods should work against clean-label attacks too, there's no need to put a trigger on the image).\\n\\n\\nFinally, the experimental section is missing key information for reproducing the experiments. The parameters \\\\tau, R and M are given a value but not a definition. The figure with the average accuracy vs. time is missing a caption and a figure label. It is extremely unclear what this figure is showing as 1) the parameters are missing descriptions 2) the metric used for evaluating the figure is described nowhere in the paper. This problem also extends to tables 1, 2 and 3: a clear definition of the \\\"evaluation metric\\\" should be given. Are we interested more in preserving accuracy or in the attack success? How is poisoning success defined? (this might be explained in the supplementary material, but it is important for understanding the whole results). Why not including a plot with poisoning success vs. accuracy of the model?\\n\\nTo summarize, the paper is promising, but important details and clarifications are still needed. The experimental section and the way results are presented needs major improvement, as it is hard to tell if the attack is working and how efficient and effective it is from the data presented in this paper. \\n\\n\\n\\n** Minor comments: ** \\n\\n* The Poison Frogs attack is described in Section 2, marking as a drawback the fact that it only works with fine-tuning. 
It is not clear, however, why this is a limitation, as one could train the model with normal training and add the poisoned data in the last epochs. \\n\\n* In Algorithm 1, step 9: what is the update being performed? It seems to me that the pseudo-code does not capture all the processing steps, hence making the whole work hard to reproduce.\\n\\n* Figure 2 shows gradient alignment along epochs (please report the axis labels); however, it does not seem \\\"flat\\\" in the end - it is slightly decreasing. What happens if we increase the number of epochs? Will the alignment disappear?\\n\\n* Sometimes the reader's expertise is taken for granted (e.g. define \\\"unrolled gradient\\\"). This might make it difficult for the paper to reach a broader audience.\\n\\n* Eq. 1 shows no constraints on the data points staying in the feature space after perturbations. Is it considered during the experiments?\\n\\n* It is observed that VGG11 on CIFAR10 is less transferable, but it would be interesting to read a possible explanation for this phenomenon.\\n\\n* Equations should distinguish vectors from scalars to improve readability.\\n\\n* Figure 4 is unreadable as the text in labels and legends is too small.\\n\\n\\n** Comments after reading the authors' rebuttal **\\n\\nI would like to thank the authors for their clarifications. The threat model is now clearer to me - and I think it deserves clarifications in the paper as well.\\n\\nFirst of all, as far as I understand now, there's a clear distinction between backdoors and clean-label attacks. Backdoor attacks assume that the attacker controls the design phase and the training process, and releases a backdoored model (which then someone else re-uses, possibly with fine tuning). Hence, defenses against backdoors aim to detect whether models have been backdoored or not, and it is reasonable to expect that the defender doesn't know the training data or other design choices (as the attacker released the model). In this setting, clean-label attacks do not make sense (as the attacker controls the training labels too).\\n\\nClean-label attacks assume a different setup. Here the attacker only injects poisoning samples into the training set but controls neither the training process nor the training labels. Hence, clean-label attacks make sense in this setting. However, it also makes sense that the defender knows the training data (as the defender is the one that trains the algorithm, and the purpose is to either detect and remove the poisoning points or reduce their influence over training) - and hence I'm expecting the authors to consider previous defenses that assume knowledge of the training set in their work.\\n\\nTo summarize, I think that:\\n\\n(1) the authors should clarify in the title that they restrict themselves to clean-label integrity/targeted poisoning attacks.\\n\\n(2) the authors should clarify the threat model, and clearly distinguish poisoning availability attacks (bilevel data poisoning) vs poisoning integrity attacks. Furthermore, in the poisoning integrity/targeted family, backdoor and clean-label attacks should be distinguished and the threat models clarified (in particular, w.r.t. assumptions on what the attacker/defender know and have access to).\\n\\n(3) the authors should revise their sentences on the complexity of data poisoning (previous clean-label targeted attacks like poison frogs are not as complex as bilevel data poisoning attacks). A fairer comparison in terms of complexity should also be considered - how much faster is this new attack compared with poison frogs and the other clean-label targeted attacks? (poisoning availability should not be considered here as the goal is different in that case).\\n\\n(4) In general, there is a need to disambiguate clean-label targeted poisoning attacks from the rest, and better position this work in context. Reading the paper in its current form, it seems that the authors are also able to improve scalability of poisoning availability attacks, whereas this is not the goal of this work. \\n\\nI'm willing to revise my score if the authors agree to make these clarifications in the paper, better highlighting the net contributions of their work and the proper context of competing approaches (which do not include backdoors and poisoning availability attacks).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Blind review\", \"review\": \"## Summary\\n- The paper proposes a novel data poisoning attack i.e., to perturb a small fraction of images in the victim's training dataset so as to cause targeted misclassification on certain examples at inference time.\\n- The proposed approach works by perturbing the clean poison set to introduce a gradient direction which mimics the victim training their model on a targeted mislabeled set.\\n- Experiments on CIFAR10 and ImageNet demonstrate that the model outperforms competing approaches.\\n\\n---\\n\\n## Strengths\\n\\n**1. Attack insight**\\n- I appreciate the insight used to craft the perturbations for the poisoned instances. It seems reasonable to me to exploit the fact that the poisoned instances are used for training using a gradient-descent approach; using the gradient information as a yardstick to craft the perturbations is a nice insight that the paper leverages.\\n\\n**2. Thorough evaluation**\\n- I am impressed by the thoroughness in the evaluation. The authors extensively evaluate numerous factors influencing the performances (e.g., size of ensembles, no. of restarts), compare with recent baselines, etc.\\n- To add to it, the authors further evaluate on Imagenet and achieve strong results.\\n\\n**3. Writing**\\n- I enjoyed the writing in the paper. I found the presentation clear and easy to follow.\\n\\n---\\n\\n## Concerns\\n\\n### Major Concerns\\n\\n**1. Attack assumes access to exact training data**\\n- If I understand the approach correctly, it assumes access to the exact dataset used by the victim to train the model? Isn't this a really strong assumption?\\n- Because if this is the case in the threat scenario, couldn't the attacker simply poison the entire dataset?\\n- As a result, I wonder whether the attack also extends to the more interesting and practical case where the adversary has limited access to the victim's training set.\\n\\n**2. From scratch**\\n- At many times in the paper, the authors remark that the attack works in spite of the targeted model being trained from scratch from an unknown initialization.\\n- However, I would suspect that it is easier to tailor the poisoned instances with access to a strong gradient signal, such as early on during training. Are the authors aware whether the approach is robust to victim models that has been pretrained?\\n\\n### Minor Concerns\\n\\n**3. Test accuracies**\\n- Could the authors comment on the difference in victim's test-set accuracy training with the clean and poisoned training set? I found this largely missing, since the focus primarily seems to be on the accuracy on the target set.\\n- Because a minor concern I have is that the victim model might be overfitting to the poisoned instances by trading off test-set accuracy. It would be nice to know how severe this is.\\n\\n**4. \\\"single differentiation\\\"**\\n- The authors claim that the attack requires only a \\\"single differentiation\\\". But doesn't the model have to be twice differentiable ($\\\\nabla_x \\\\nabla_\\\\theta \\\\mathcal{L}(\\\\cdot)$) to perform the updates?\\n\\n\\n### Nitpicks\\n\\n**5. Strong focus on MetaPoison**\\n- The paper makes many head-to-head comparisons with MetaPoison. I'm not sure why, since MetaPoison doesn't seem too closely-related. Especially in S5.2, it appears that it is singled out to demonstrate the computational overhead. This is understandable since it's relies on a meta-learning approach.\\n\\n**6. 
Writing**\\n- Fig 4b is unreadable -- I recommend resizing the figure.\\n- It seems surprising that the related work section claims that poisoning attacks, unlike backdoor attacks, do not require access to test data. As I'm aware, both require the same access to test-set -- specifically that a particular test instance is presented at inference time to cause misclassification. In fact I would think backdoors are more generalizable here since any test instance can be watermarked to introduce misclassification, unlike pre-specified instances in the case of poisoning.\\n\\n### Post-rebuttal update\\nI thank the authors for their response -- this helps. Having read the other reviews, I am still leaning towards acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"New and practical targeted poisoning attack\", \"review\": \"This paper proposed a simple yet effective approach for data poisoning attack targeting a few \\\"clean-label\\\" victim images, using the idea of gradient matching (cosine similarity maximization) between the gradients of adversarial and clean losses. Although the attack model still requires knowing the network architecture (gray-box setting), the resulting poisoned datasets are more effective against different initializations, and some techniques (e.g. model ensemble, multiple restarts) are proposed to further boost the attack performance. The attack results are significantly better than the compared poisoning attacks, and the authors show effective attacks on the ImageNet dataset as well as Google Cloud AutoML with the poisoned data. The authors also discussed the proposed attack on some defenses, showing that the poison has limited change to feature distribution, and differential privacy can mitigate the attack but at the cost of reduced utility (clean accuracy).\\n\\nOverall, this paper shows some new insights and sets new benchmarks for targeted data poisoning attacks, with practical threat assessment on ImageNet datasets and Google Cloud AutoML, which I deem as a significant contribution. The proposed gradient matching is simple, intuitive, yet very effective. One limitation from Appendix A.8 is that the proposal may not scale well to more than 1 target image, as indicated by the rapidly decreasing attack accuracy. It will be more meaningful to control the effective budget/target and check the resulting accuracy of different number of targets, in order to understand whether gradient matching is scalable to multiple-target setting.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Important empirical work demonstrating real threat of poisoning attack on large-scale CNNs.\", \"review\": \"This paper presents a scalable data poisoning attack algorithm focusing on targeted attacks. The technique is based on gradient matching, where the intuition is to design the poisoning patterns such that their effect on the gradient of the training loss mimics the gradient as if the targeted test image is included in the training data.\\n\\nThe paper presents both theoretical intuitions behind the algorithm, as well as empirical reduction and simplification to make the algorithm scalable to ImageNet and applicable to even a black-box attack against the Google Cloud AutoML toolkit.\\n\\nThe algorithm proposed in this paper is practical and general, making it a realistic poisoning threat to modern deep learning systems. The presentation is clear and the theoretical justification is intuitive and easy to understand.\\n\\nOverall, I think this paper is a good contribution to the study of the large-scale poisoning attack.\", \"minor_typo\": \"In proof of Prop 1, you need the angle between the two gradients to be almost always smaller than 90 degrees, not 180 degrees.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
oFp8Mx_V5FL | Overfitting for Fun and Profit: Instance-Adaptive Data Compression | [
"Ties van Rozendaal",
"Iris AM Huijben",
"Taco Cohen"
] | Neural data compression has been shown to outperform classical methods in terms of $RD$ performance, with results still improving rapidly.
At a high level, neural compression is based on an autoencoder that tries to reconstruct the input instance from a (quantized) latent representation, coupled with a prior that is used to losslessly compress these latents.
Due to limitations on model capacity and imperfect optimization and generalization, such models will suboptimally compress test data in general.
However, one of the great strengths of learned compression is that if the test-time data distribution is known and relatively low-entropy (e.g. a camera watching a static scene, a dash cam in an autonomous car, etc.), the model can easily be finetuned or adapted to this distribution, leading to improved $RD$ performance.
In this paper we take this concept to the extreme, adapting the full model to a single video, and sending model updates (quantized and compressed using a parameter-space prior) along with the latent representation. Unlike previous work, we finetune not only the encoder/latents but the entire model, and - during finetuning - take into account both the effect of model quantization and the additional costs incurred by sending the model updates. We evaluate an image compression model on I-frames (sampled at 2 fps) from videos of the Xiph dataset, and demonstrate that full-model adaptation improves $RD$ performance by ~1 dB, with respect to encoder-only finetuning. | [
"Neural data compression",
"Learned compression",
"Generative modeling",
"Overfitting",
"Finetuning",
"Instance learning",
"Instance adaptation",
"Variational autoencoders",
"Rate-distortion optimization",
"Model compression",
"Weight quantization"
] | Accept (Poster) | https://openreview.net/pdf?id=oFp8Mx_V5FL | https://openreview.net/forum?id=oFp8Mx_V5FL | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"EncQeBK51jH",
"AE53A8QCcyO",
"qqvqrsiAsF",
"q4BgXVrZZP",
"iluKAYsQ4GW",
"1HyPNCscQS",
"AaOaYui8hV",
"9gd41VZl_rN",
"Q9z0tYEUzMZ",
"etuBJ7hjZK",
"r_mFr_zOCAA",
"UaD4r-rajDX",
"Z05kJM7Ecio",
"RTv80aNTt9E",
"HgNcFeMBAX",
"Ap52XlEYB0l",
"TREz-LxdknP",
"3ffXrnHKWD9",
"IsBt0RHOup",
"cYTAWjKYttx"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040351071,
1606305608723,
1606298859269,
1606297353512,
1605894587776,
1605894476669,
1605894379051,
1605718806025,
1605718536302,
1605716312202,
1605714571734,
1605713415674,
1605712415791,
1605711161215,
1605294332182,
1605280161520,
1603894091200,
1603881577747,
1603858786503,
1603563900767
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3658/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3658/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper suggests a procedure to efficiently adapting a learned neural compression model to a new test distribution. If this test distribution has low entropy (e.g., a video as a sequence of interrelated frames), large compression gains can be expected. To achieve these gains, the method adapts the decoder model to the new instance, transmitting not only the data but also a compressed model update. Experiments are carried out on compressing I-frames from videos, while comparisons comprise baseline approaches that finetune the latent representations of videos as opposed to the decoder.\\n\\nThe paper\\u2019s main contribution is very timely and relevant. While it was well-known in the classical compression literature that model updates could be sent along with the data (e.g., as already done in \\u201coptimized JPEG\\u201d), this is the first time the idea was implemented in neural compression. The experiments are arguably the paper\\u2019s weaker part and were originally a concern, but they have been significantly improved during the review period such that all reviewers voted for acceptance. We encourage the authors to further strengthen their experimental results by adding more challenging baselines on well-established tasks (e.g., image compression).\"}",
"{\"title\": \"Thanks for Reconsidering\", \"comment\": \"We thank the reviewer for reconsidering his/her review and giving additional feedback. We will update the paper (in case of acceptance) accordingly.\"}",
"{\"title\": \"Thanks for the thorough response.\", \"comment\": \"Thank you for responding to all my questions/concerns and improving the submission.\\nI have increased my score in light of the substantial improvement to the method and experiments.\", \"a_few_nitpicks_for_the_updated_manuscript\": \"1. there was never a specification of what \\\"quantization-aware training\\\" is, so it's not clear how the ablations (II, III) actually remove the \\\"quantization-aware training\\\" component of the method;\\n2. there's a broken reference \\\"discussion in Section ??.\\\" at the end of section 1.\"}",
"{\"title\": \"Revision Summary\", \"comment\": [\"As today the rebuttal period ends, we would finally like to thank all reviewers for their time and constructive feedback, which greatly improved the quality of our work. To summarize; with respect to the initial submission, the following changes were made:\", \"We changed our experiments to a realistic setting where we finetune an I-frame compression model on a set of I-frames (sampled at 2 fps) for various videos, resulting in a considerable gain of (on average) 1 dB at the same bit rate.\", \"We updated our model prior to a spike-and-slab prior (p. 4, Section 3.2), such that the model itself can learn which parameters are worth the update and which aren't (and are therefore negligibly cheap to encode).\", \"We added two ablation experiments: one that separately quantifies the performance gains thanks to quantization- and model rate-aware finetuning, and one that investigates the influence of the number of I-frames on final performance (including the image-compression setup where we finetune on a single I-frame). The first ablation shows that both quantization- and model rate-aware finetuning greatly improve the compression performance (p. 8, Fig. 2c; p. 19 Fig 10), while the second ablation demonstrates that full-model finetuning with the spike-and-slab model prior works well for a wide range of number of frames (p. 17, Appendix D, Fig. 7).\", \"We added an extra baseline (p. 8, Fig. 2a); direct latent optimization (Campos et al., 2019), and we now report for four different beta values (rather than three in the initial submission).\", \"We hope that these updates take away the reviewers' original concerns.\"]}",
"{\"title\": \"Added Temporal Ablation Experiment\", \"comment\": \"The initial review of the referee asked whether we could explain the relatively low performance of encoder-only finetuning. As was already suggested by the referee back then (and acknowledged by us in the reply), the encoder/latents are more difficult to be finetuned when the prior and decoder model are frozen. This hypothesis is confirmed by [our new ablation experiment](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=AaOaYui8hV) in Appendix D (Fig. 7), which shows that full-model finetuning is able optimize rate (for low bit rate regime), and distortion (for high bit rate regime) much more than encoder/latent-only finetuning.\"}",
"{\"title\": \"Added Temporal Ablation\", \"comment\": \"We thank the reviewer for the comment regarding publishing negative results. We slightly extended this idea by doing an ablation experiment in which we varied the number of I-frames on which we finetuned ([see our reply in the general thread](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=AaOaYui8hV)).\"}",
"{\"title\": \"Added Temporal Ablation\", \"comment\": \"Some of the reviewers asked us to perform an image-compression experiment. Though we [explained why full-model finetuning is non-beneficial for (single) image compression](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=RTv80aNTt9E), AnonReviewer 2 noted that it would still be valuable to include such (possibly negative) results in the paper.\\n\\nWe definitely agree with publishing negative results, and therefore have now updated our manuscript with a temporal ablation, where for one video and two values of $\\\\beta$ we repeat our main experiment for different numbers of frames sampled from the video.\\n\\nWe show that full-model finetuning outperforms the encoder/latent-only finetuning methods, even for a really low number of frames. Full-model finetuning is found to be too costly when finetuning on 1 frame at the lowest bitrate. We believe that the added ablation study is of strong interest to the compression community as it clarifies the current boundaries of full-model finetuning. \\n\\n~~_Note: As the experiments for the temporal ablation are not yet finished, we only used data from the first 30.000 finetuning steps to create Figure 7. This is the largest number of steps for the slowest run, ensuring a fair comparison at this moment. In case of acceptance we will update the figure with results after training for 100.000 steps (our default). Since most improvements are achieved in the early stages of training (see Figure 2b) we do not expect the results to change meaningfully._~~\"}",
"{\"title\": \"Response to AnonReviewer3 (2/2)\", \"comment\": \"> \\\"The use of the continuous density for the M (model update cost) term in Eq 2 is established in Appendix A by showing that the gradient of the discrete cost \\\\bar M has the same gradient (up to first order) as that of -log p(\\u03b4) based on the density p(\\u03b4). Did I understand this correctly? But M = -log p(\\u03b4) doesn't actually give an estimate of the cost after discretization \\\\bar M = -log p[\\\\bar \\u03b4]. Instead, the typical thing to do in literature (due to Balle et al.) is to actually minimize -log p[\\\\bar \\u03b4], where \\\\bar \\u03b4 = round(\\u03b4), and the rounding can be either approximated by uniform noise injection or STE. Can the authors comment on this choice of their method? \\\"\\n\\nWe thank the referee for this interesting question. We confirm his/her understanding of Appendix A. Indeed the gradient of the continuous model rate penalty is (up to first order) equivalent to the gradient of its discrete counterpart, and indeed, the continuous penalty M does not give an estimate for the number of bits to be paid for the model update costs. This mismatch is caused due to a bias present between the number of bits and its continuous measure. Though, realize that a bias does not affect optimization behavior as it leaves the gradient unaffected, and thus gradient-based optimization as well. The fact that the continuous model rate costs itself are thus not a proxy for the actual number of bits to be paid does not matter while finetuning the model, as only the gradient is important to be a valid proxy.\\n\\nThe referee proposes the use of the discrete model rate costs during training, including the Straight-through estimator to enable gradient updates. We indeed agree that the discrete bit rate overhead (as presented in App. A2, Fig. 4 (bottom)) could have been used for finetuning, as we indeed applied the Straight-through estimator to compute this gradient.\\nHowever, empirically we found the influence of finetuning with either the discrete or continuous model rate penalty negligible and therefore chose to adopt the continuous penalty as it might prevent unstable gradients that constantly switch (when being on the boundary between two quantization bins) during finetuning. Additionally, after updating the manuscript to use the proposed spike-and-slab prior ([see our reply from 13 November](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=Ap52XlEYB0l)), we extended Appendix A2 with a figure (orange in Fig. 4) showing the effect of the spike on the gradient. From that figure we see how the spike's effect on the gradient is almost fully canceled out due to quantization. This provides an extra reason why we finetune with the continuous regularizer.\\n\\n> \\\"Typos and minor mistakes/fixes:\\\"\\n\\np. 2, under eq (1): The R-D loss is equivalent to the negative ELBO in VAEs;\\nWe thank the reviewer for this remark and changed it in the updated manuscript.\\n\\n> Does Figure 3 bottom show the histogram of bit allocation for \\\\bar \\u03b4? If so then the caption can just say \\\"Bottom: histogram of bit allocation for \\\\bar \\u03b4\\\" as it's clearer.\\n\\nIndeed the bottom row in Fig. 3 shows how much bits are being paid per update level $\\\\bar{\\\\delta}$. We followed the referee's advice and changed the caption of this figure.\\n\\n### References\\n- Joaquim Campos, Simon Meierhans, Abdelaziz Djelouah, and Christopher Schroers. 
Content adaptive optimization for neural image compression. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 0\u20130, 2019.\\n- Yibo Yang, Robert Bamler, and Stephan Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33, 2020.\"}",
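The continuous-versus-discrete penalty distinction discussed above can be summarized in a short sketch (Gaussian prior only; $\sigma$ and $t$ follow the values of Sec. 4.3, while the straight-through wiring is our illustration, not the authors' exact code):

```python
import torch
from torch.distributions import Normal

sigma, t = 0.05, 0.005                  # values reported in Sec. 4.3
prior = Normal(0.0, sigma)

def quantize_ste(delta):
    """Round delta to the bin grid; straight-through gradient."""
    q = t * torch.round(delta / t)
    return delta + (q - delta).detach() # forward: q, backward: identity

def continuous_rate(delta):
    """M = -log p(delta): biased as a bit estimate, but with (to first
    order) the same gradient as the discrete cost, cf. Appendix A."""
    return -prior.log_prob(delta).sum()

def discrete_rate_ste(delta):
    """Discrete cost -log P[quantized delta] (in nats): the probability
    mass of each bin under the Gaussian, made differentiable via the STE."""
    q = quantize_ste(delta)
    mass = prior.cdf(q + t / 2) - prior.cdf(q - t / 2)
    return -torch.log(mass).sum()
```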
"{\"title\": \"Response to AnonReviewer3 (1/2)\", \"comment\": \"We thank the reviewer for his/her time to review our work. The raised concerns are answered below:\\n\\n> \\\"It's unclear from the description if the evaluation on UVG actually \\\"adapts the entire model to a single data instance\\\" (i.e., for each image) as claimed, or amortizes the model update cost over a batch of all the images in a video. The paper claims that \\\"In this paper we consider the extreme case where the domain of adaptation is a single instance, resulting in costs for sending model updates which become very relevant\\\", but this would highly misleading if all the experiments were conducted in a batch compression setting.\\\"\\n\\nWe agree with the referee that our initial formulation facilitated mis-interpretation. As explained in our reply in the general thread, we now changed to a realistic I-frame setup in which the model is adapted to a set of I-frames from one video (and amortize of all these frames). We rephrased the formulation in the paper to be more clear on the definition of one instance in our setup, and thereby hope to have taken away the reviewer's concern.\\n\\n\\n> \\\"Since the paper's contribution is about improving the existing fine-tuning strategy that tackles model update quantization after fine-tuning (e.g., Zou et al., 2020), the proposed method should then also compare to these baselines to really assess its performance.\\\"\\n\\nWe agree with the reviewer that the original manuscript lacked evidence for the proposition that quantization-aware finetuning improves compression performance. We therefore added an ablation study in which we show that both quantization- (and model rate-) aware finetuning greatly improves performance. \\n\\n\\n> \\\"It would also be interesting to compare with approaches that optimize the encoded latents (e.g., Yang et al., 2020), which also achieve close to 1 PSNR improvement at equal bitrate without the overhead of decoder updates.\\\"\\n\\nWe also agree with the referee that a baseline was missing in which the latents are optimized directly (as in Campos et al, 2019 & Yang et al., 2020). As such, we updated our experimental section with comparison to latent-only finetuning, which is shown to perform similarly to encoder-only finetuning. \\n\\nThe additional framework-agnostic improvements proposed by Yang et al. (2020) (e.g. bits-back coding) in order to achieve a final gain of 1 dB, can in future research be added to our novel concept of full-model finetuning as well. In order to make a clean and fair comparison, we thus compare to latent-finetuning only without the additional improvements proposed.\\n\\n> ### Questions:\\n> \\n> Can the author comment on how \\\"the quantization bin width t and standard deviation \\u03c3 of $p[\\\\bar{ \\\\delta}]$\\\" (Sec 4.3) are chosen? How sensitive is the compression performance to their choice,\\n\\nBoth the quantization bin width $t$ and standard deviation $\\\\sigma$ were empirically chosen, without major tuning. We initially run a naive, unregularized finetuning experiment to see in which order of magnitude the parameter updates would be distributed. Setting sigma=0.05 seemed to be an appropriate choice, which we did not tune further ever since. Thereafter we heuristically set the quantization bin width a factor 10 lower to 0.005. 
> is it possible to discretize so finely that no amount of RD improvement can overcome the model update cost?\\n\\nIndeed, quantization can be so fine that the number of bits needed to encode each quantized update is so large that the resulting model update costs cannot be overcome by an increase in RD performance. In this situation, when optimizing the RDM loss, no parameters will be finetuned (i.e. $\\\\delta$ will remain $\\\\mathbf{0}$). As a consequence, the finetuned model will have identical distortion but $\\\\bar{M}_0$ added to the rate. Thanks to our currently employed spike-and-slab prior, these static costs are rather small, and looking at the plots that show the finetuning progression over time (Appendix D, Fig. 8), we can also see that a net RD gain is achieved right from the start of finetuning.\"}",
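For reference, the spike-and-slab variant mentioned above could look as follows, reusing `prior`, `t` and `quantize_ste` from the previous sketch; the spike mass `alpha` and this exact parameterization are our assumptions (the paper's Sec. 3.2 has the actual form):

```python
def spike_and_slab_rate(delta, alpha=0.9):
    """Discrete update cost under a spike-and-slab prior: probability mass
    `alpha` concentrated on the zero-update bin, a Gaussian slab elsewhere.
    A sketch; alpha and the parameterization are illustrative assumptions."""
    q = quantize_ste(delta)
    slab = (1.0 - alpha) * (prior.cdf(q + t / 2) - prior.cdf(q - t / 2))
    p = torch.where(q == 0.0, alpha + slab, slab)
    return -torch.log(p).sum()

# With a large spike, the static cost paid for untouched parameters
# (delta = 0 for all of them) stays small, matching the rebuttal's point
# that the initial costs M̄_0 are "rather small".
```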
"{\"title\": \"Thanks for reconsidering\", \"comment\": \"We thank the reviewer for reconsidering his/her review and posting a reply again. We will answer the remaining concerns below:\\n\\n> ### Quality\\n> ...\\n\\nWe agree with the referee that our initial adoption of the I-frame model (i.e. all frames were I-frames) comprised a weak baseline. However, as already acknowledged by the reviewer, we now changed to a realistic setup, in which we sampled the I-frames at 2 frames per second as is common in actual video compression systems. \\n\\n\\n> \\\"A reader familiar with compression will be very well aware that a neural decoder could be included in the bit-stream, making the conceptual contributions less interesting.\\\"\\n\\nThe reviewer mentions that \\\"a reader familiar with compression will be very well aware that a neural decoder could be included in the bit-stream\\\". We indeed agree that familiar readers will be aware of the fact that inclusion of an updated model in the bitstream is a possibility. However, doing this while not greatly increasing the resulting rate is highly non-trivial, as can be seen by the fact that related work often focuses on encoder-only (Aytekin et al. (2018) & Lu et al. (2020)) or latent-only finetuning (Campos et al. (2019), Yang et al. (2020), Gou et al. (2020)), or only finetuning a (small) part of the decoding model (Lam et al., 2019;2020, Klopp et al. (2020)), rather than the full model. To the best of our knowledge, finetuning an entire neural network model (and showing larger RD gains), has never been done before.\\n\\n> \\\"If model complexity was a concern, the authors could have evaluated their approach on images instead of videos. The results would have looked less impressive but would have been more useful.\\\"\\n\\nThis original remark is close to the newly made request to report negative results on the use case where each instance is exactly one image/frame. We agree that reporting such negative results is in fact of interest to the reader of this paper. We aim to update the paper with such results. Depending on the computational resources we will have available the coming days, we hope to finish these experiments before the end of the rebuttal period.\\n\\n> \\\"Alternatively, they could have chosen a different video compression architecture of low complexity but one which is still practically relevant. E.g., one motivated by computational constraints.\\\"\\n\\nTo respond to the raised concern, we'd like to refer to [our earlier answer in the general thread](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=RTv80aNTt9E), explaining why moving to a lower-complexity video compression model might not be so trivial. \\n\\n> ### Significance (4/10)\\n> ...\\n\\nWe hope that the previous answers have clarified that we mainly foresee big opportunities for full-model finetuning for video compression. The reported results show how our full-model finetuning framework has merit to greatly improve the sub-problem of I-frame compression in video compression.\\n\\n> ### Originality (4/10)\\n> Including model information in the bit-stream is an old idea in compression and not limited to neural compression. For example, Netflix is optimizing their classical video codecs at a \\\"shot\\\" level. Even JPEG (1992) allows us to fine-tune the Huffman table for an individual image (\\\"optimized JPEG\\\"). \\n\\nWe agree with the referee that the idea of including model information to the bit stream is not novel, and we did not intent to claim this. 
We already acknowledged related work that also finetuned (parts) of the decoding model (Lam et al., 2019;2020, Klopp et al., 2020). Yet, extending the bit stream with information regarding *full-model* updates is novel, and has to the best of our knowledge never been done before. \\n\\n> It is also common for compression challenges to require the model to be included in the bit-stream (e.g., the Hutter prize or the P-frame challenge of CLIC 2020).\\n\\nWe acknowledge the reviewer's remark. However the concept is not equivalent. The Hutter prize deals with language models, which are conceptually different from video/image models, and in the P-frame challenge of CLIC2020 the entire model size is added to the compressed data size. The goal of such a model-size penalty is to promote small model designs over ever-growing (heavily over-parameterized) neural networks. In our use case we do not necessarily want to limit the total size of the model, yet we are interested in restricting the model updates under a given model prior used for entropy coding these updates.\\n\\n\\n> Many papers have been written on the related topic of model compression (e.g., Han et al., 2016), which should at least be acknowledged. Compressed model updates are also used in parallelized implementations of SGD (e.g., Alistarh et al., 2017).\\n\\nWe agree with the referee that we lacked references to works in the model compression literature. As such, we have extended our related work section with a paragraph dedicated to model compression research, acknowledging (among others) the proposed references by the reviewer.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for this positive and constructive review. We are pleased to read that the reviewer appreciates Fig. 2b and 3 specifically, as we indeed added those to provide the reader with insights behind the final results.\\n\\n### Remarks on encoder-only performance\\n\\nWe agree with the interesting observation the referee makes regarding encoder-only finetuning. When we started this research, we initially investigated how (naive, unregularized) finetuning of different subsets of the model (e.g. encoder+prior or encoder+decoder) affected performance. We quickly noticed that best results were found when both (a subset of) prior and decoder were finetuned. Finetuning (part of) the prior (on top of encoder finetuning) faciliated reduction in rate, while finetuning (part) of the decoder reduced distortion. In order to achieve best RD performance, we thus concluded that full-model finetuning is definitely desirable. And indeed, we share the referee's hypothesis that only finetuning the encoder parameters (or latents directly) is limited by the fact that the latent prior is frozen.\\n\\n### Learned model prior\\n\\nWe find it interesting to read that the reviewer is as curious as we are to see how performance will benefit from using a learned prior, rather than a fixed Gaussian. A natural extension would indeed be to jointly train the standard deviation $\\\\sigma$ and/or quantization bin width $t$ per parameter, while still restricting ourselves to Gaussian priors. We however foresee a situation where both $\\\\sigma$ and $t$ collapse to extremely small values, resulting in a prior with (almost) zero-entropy. This would result in the initial costs $\\\\bar{M}_0$ being zero and is therefore a trivial solution in which the model could easily collapse, making this natural extension possibly less trivial than expected.\\nWhen moving to more complex and highly parameterized learned model priors, the question arises whether its parameters are fitted to a data instance or to a dataset of instances. In the first case, we need to signal its parameters in the bitstream which would likely be costly. When the prior is fitted over a dataset of instances, training might be expensive and there are no guarantees that the prior would generalize to unseen instances.\\n\\nThe previous reasoning made us belief that the use of learned priors for our model prior can best be investigated in a separate, future research. Besides, we belief that we present an elegant and simple concept that already provides considerable gains. The fact that this framework already works using such a naive model prior, opens up a whole new field for future research in neural data compression.\\n \\n### Quantization\\n\\nThe suggestion of the reviewer to use quantization bins of equal mass rather than bins with uniform spacing is indeed interesting. It would increase the support for large model updates, possibly without (a large) increase in model rate, and it faciliates finer quantization for the small updates. As we also wanted to improve upon the relatively large initial static costs in our revised manuscript, we upgraded our model prior to a spike-and-slab prior (see [our response of 13 Nov](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=Ap52XlEYB0l)). To not induce multiple changes at the same time, we leave the extension to use equal-mass quantization bins for future research.\"}",
"{\"title\": \"Response AnonReviewer4\", \"comment\": \"> \\\"Method has only been evaluated with respect to its own baseline method (image compression model without finetuning)\\\".\\n\\nWe follow the reviewer's advice, and now implemented direct latent optimization as proposed by Campos et al. (2019), and later used as well by Yang et al. (2020) and Guo et al. (2020), next to the already present encoder-only finetuning baseline. Besides, the non-finetuned baseline model is the de-facto neural image compression standard nowadays (see [our general reply from 13 November](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=Ap52XlEYB0l)), making it another valid baseline to show the merit of this new concept in our opinion.\\n\\n> \\\"Method has only been evaluated on one video dataset, but by compressing frame by frame, therefore not taking advantage of temporal redundancy.\\\"\\n\\nWe indeed show results of our framework on one dataset, but one should realize that in typical machine learning setups, one dataset entails one training where the model learns to capture the statistics of this dataset. In our case, for each video in this dataset a new model is being finetuned, making each video an experiment on its own, as each video's characteristics differ. We've changed to the Xiph dataset (see [our reply from 13 November](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=Ap52XlEYB0l)), and the selected videos vary in many aspects including framerate, camera used for shooting, single-shot vs multi-shot and clip content.\\n\\n\\nThe remark regarding ignoring temporal redundancy has been raised by multiple reviewers. In response, we now changed our all-intra frames setup (i.e. all frames are I-frames), to a realistic use case of I-frame compression at 2 fps. For more details we refer again to [our general reply from 13 November](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=Ap52XlEYB0l).\\n\\n\\n> \\\"Given that it is an image compression method, the proposed instance adaptive method could also be evaluated on the e.g. clic validation set.\\\"\\n\\nWe thank the reviewer for the suggestion. As AnonReviewer3 made the same remark as an answer to our general reply, our answer [can be found there](https://openreview.net/forum?id=oFp8Mx_V5FL¬eId=RTv80aNTt9E).\\n\\nSome open questions\\n\\n> \\\"Is $\\\\bar{M}$ computed for the whole video and averaged per frame for the results in Table 1 and therefore dependent on the length of the video?\\\"\\n\\nThe referee indeed understood correctly that $\\\\bar{M}$ was (in the original version of the paper) computed over the whole video, as finetuning also took place on the entire video. As we now overfit the full model on I-frames from videos that are only sampled at 2 fps (see our reply in the general thread), the model rate costs are also amortized over only these frames. We belief we made this more clear in the updated version of the manuscript. In Table 1 we initially provided the costs of $\\\\bar{M}$ both in bits/pixel and bits/parameter. The former thus averages the costs over the pixels of all I-frames, making it dependent on the number of frames. The latter is dependent on the number of trainable parameters in the model and thus depends on the chosen model architecture. Upon this remark of the reviewer, we realized that it might also be of interest to the reader to see model rate expressed in bits or bytes per frame. 
As such, we extended Table 1 with an expression of M in this unit as well.\\n\\n\\n> \\\"Do the authors have some intuition, why some videos are easier to finetune than others?\\\"\\n\\nWe thank the reviewer for this interesting question, indeed finetuning gain differs among videos. Note that the performance of the global model for each video differs already, therewith influencing the maximum gains to be achieved by finetuning. Also, video characteristics such as motion and frequency content greatly influence the diversity of the set of I-frames, thereby affecting the ease of model-adaption. \\n\\n> \\\"References of arxiv papers, which have been published before submission deadline, can be updated with the respective conference. \\\"\\n\\nWe thank the reviewer for this comment and updated all references with the appropriate conference or journal where possible.\\n\\n**References**\\n- Joaquim Campos, Simon Meierhans, Abdelaziz Djelouah, and Christopher Schroers. Content adap- tive optimization for neural image compression. In Proceedings of the IEEE Conference on Com- puter Vision and Pattern Recognition Workshops, pp. 0\\u20130, 2019.\\n- Yibo Yang, Robert Bamler, and Stephan Mandt. Improving inference for neural image compression. Advances in Neural Information Processing Systems, 33, 2020b.\\n- Tiansheng Guo, Jing Wang, Ze Cui, Yihui Feng, Yunying Ge, and Bo Bai. Variable rate image compression with content adaptive optimization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 122\\u2013123, 2020.\"}",
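The unit conversions discussed above are simple arithmetic; a short illustrative sketch follows. All numbers are made up for the example and are not taken from Table 1.

```python
# Expressing a model-update cost in bits/pixel, bits/parameter, and bytes/frame.
# Illustrative assumptions only.
model_bits = 2_000_000        # total bits spent on the quantized model update
n_iframes = 30                # e.g. a 15 s clip with I-frames sampled at 2 fps
width, height = 1920, 1080    # frame resolution
n_params = 5_000_000          # number of finetuned (trainable) parameters

bits_per_pixel = model_bits / (n_iframes * width * height)
bits_per_param = model_bits / n_params
bytes_per_frame = model_bits / 8 / n_iframes
print(f"{bits_per_pixel:.5f} bpp | {bits_per_param:.2f} bits/param | "
      f"{bytes_per_frame / 1024:.1f} KiB/frame")
```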
"{\"title\": \"Updated experiments more interesting\", \"comment\": \"The updated 2fps I-frame experiment is more realistic and thus provides more meaningful results. I still think the experimental section could have been stronger. For example, why not include results on single images? Even if they are negative, they would have increased the paper\\u2019s value to the community. Nevertheless, I increased my score from 4 to 6.\"}",
"{\"title\": \"Our new I-frame compression setup is a realistic and common use-case in video compression.\", \"comment\": \"We thank the reviewer for his/her quick reply and would like to take the opportunity here to clarify the scope of our experiments in relation to the real-world use case of I-frame compression in video compression. Typical video compression comprises of independent compression of key frames (I-frames), followed by conditional compression of the remaining frames. For example Lu et al. (2019), Liu et al. (2020), Wu et al. (2018), Djelouah et al. (2019), and Yang et al. (2020a) also all compress every 8th-12th I-frame independently. In this work we specifically tackle this I-frame compression subproblem of video compression. Since we show 1 dB compression gains on this task, the next step would be to finetune the P-frame (Lu et al. (2019), Liu et al. (2020), Yang et al. (2020a)) or B-frame (Wu et al. (2018), Djelouah et al. (2019)) model on the other frames by minimizing the $RDM$ loss amortized over those frames. We leave this as an exercise for future work.\\n\\nEven though we focus on the problem of I-frame compression, this is not the same as image compression. When applying our method for image compression, we would finetune a model for each image in a dataset and amortize the model rate $M$ over the number of pixels in that image. The high number of parameters per pixel would make it very difficult to reach good compression performance when taking into account the model rate.\\n\\nInstead, we want to amortize the cost of finetuning the model over multiple images or frames. In batch-image compression, a batch of various (uncorrelated) images is to be compressed jointly. This leads to a very specialized and uncommon use-case, as it requires knowledge regarding the exact set of images the user wants to be compressed.\\nThe problem of video compression lends itself very naturally for full-model adaptation, as a user typically wants to receive the entire video.\\n\\nAs we also indicated in our discussion section, we agree that leveraging low-complexity video models is an interesting application of our full-model finetuning framework. However, the necessary neural architecture search to find such a model (which has high enough capacity to adapt to a full video, but is at the same time small enough), results in a non-trivial problem which we leave for future research. On the contrary, we chose a state-of-the-art I-frame compression model to showcase our framework. \\n\\nAs mentioned before, we now changed our all-intra frames setup (i.e. all frames are I-frames), to a realistic use case of I-frame compression at 2 fps. We have also updated our paper with more extensive explanations regarding our choice for the I-frame video compression use case, and therefore hope to take away concerns using the scope of this work.\\n\\n\\n**References**\\n- Yang Yang, Guillaume Sauti`ere, J Jon Ryu, and Taco S Cohen. Feedback recurrent autoencoder. InICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing(ICASSP), pp. 3347\\u20133351. IEEE, 2020a.\\n- Guo Lu, Wanli Ouyang, Dong Xu, Xiaoyun Zhang, Chunlei Cai, and Zhiyong Gao. Dvc: An end-to-end deep video compression framework. InThe IEEE Conference on Computer Vision andPattern Recognition (CVPR), June 2019.\\n- Haojie Liu, Han Shen, Lichao Huang, Ming Lu, Tong Chen, and Zhan Ma. Learned video compres-sion via joint spatial-temporal correlation exploration. 
InProceedings of the AAAI Conference onArtificial Intelligence, volume 34, pp. 11580\\u201311587, 2020.\\n- Abdelaziz Djelouah, Joaquim Campos, Simone Schaub-Meyer, and Christopher Schroers. Neuralinter-frame compression for video coding. InProceedings of the IEEE International Conferenceon Computer Vision, pp. 6421\\u20136429, 2019.\\n- Chao-Yuan Wu, Nayan Singhal, and Philipp Krahenbuhl. Video compression through image inter-polation. InProceedings of the European Conference on Computer Vision (ECCV), pp. 416\\u2013431,2018.\"}",
"{\"title\": \"Not sure about doing evaluation on another video dataset...\", \"comment\": \"Thanks for this detailed update and addressing some of our common concerns.\", \"i_have_one_quick_suggestion\": \"as reviewer 2, reviewer 4, and I have already pointed out, since the model and evaluation setup are mainly targeted at (batch) image compression (and considered naive and impractical for video compression), it would be really helpful to also see (batch) image compression results on standard image datasets like Kodak, Tecnick, or CLIC validation set.\\nAlternatively, if the focus is really on video compression, then working with say a lower-complexity (e.g., computationally constrained) yet still practically useful video compression model would also make the results more meaningful, as suggested by reviewer 4.\"}",
"{\"title\": \"Manuscript Updates\", \"comment\": \"We thank all reviewers for the time to review our work and for the constructive feedback. Analyzing the remarks that were shared across reviewers, we decided upon improving the paper by four main updates, discussed below. Reviewer-specific remarks and our responses to those will be addressed below each review separately.\\n\\n## Baseline model\\nMultiple reviewers found the presented baseline naive or unrealistic. Two reviewers address that temporal redundancy in videos is not taken into account in our model as we use an I-frame model where each image from a video is independently compressed. We acknowledge that the adopted I-frame frequency of 120 fps (i.e. all frames are I-frames) does not resemble a typical video compression setup, as in practice each second typically comprises only one or two I-frames. These I-frames are then still independently compressed to enable random access at any point in time. As such, we decided to change our setup to a more realistic setting where we independently compress I-frames sampled at 2 fps. \\n\\nWe would like to remark that the chosen mean-scale hyperprior model (without autoregressive context-model) (Ball\\u00e9 et al., 2018; Minnen et al., 2018) is the de-facto standard for neural image compression (Yang et al., 2020; Agustsson & Theis, 2020; Chen & Ma, 2020) and is also commonly used in video compression works where I-frames are compressed using a neural network (Agustsson et al., 2020; Djelouah et al., 2019; Lu et al., 2019). We benchmarked our implementation of this model against the reported performance in the original paper and could reproduce their performance, providing support that our implementation of this standard for I-frame compression can be used as a valid and near state-of-the-art baseline model.\\n\\n## Dataset\\nWe noticed that some confusion was present about the amortization of the additional model update costs. In the original submission, we amortized the bit rate overhead over all frames of the video, as we were also finetuning the model on the entire stack of frames. For the UVG dataset this comprised 600 frames (5 seconds, sampled at 120 fps). However, as explained in the previous paragraph, the updated experiments will only compress two frames per second from each video. As the bit rate overhead is amortized over the resulting total number of I-frames, we decided to move to the [Xiph dataset](https://media.xiph.org/video/derf/) that contains longer videos (10-20 seconds). An additional benefit of Xiph over UVG is its increased variety of video characteristics.\\n\\n## Model prior\\nAlthough optimization using our proposed $RDM$ loss automatically trades off the model update costs against the $RD$ improvements, the initial costs $\\\\bar{M}_0$ are not part of this optimization. By switching to a realistic I-frame frequency, the total number of frames for amortization of the bit rate overhead is heavily reduced, and the static initial costs are rather large. \\nWe therefore updated our model prior by increasing the probability mass for zero-updates ($\\\\bar{\\\\delta}=0$) by adding a narrow Gaussian around this zero-update. In the revised manuscript we show that this updated prior is a generalization of the earlier proposed Gaussian model prior, and that the resulting updates are sparser. 
Finetuning with this new prior works well for both short and long videos, making the proposed method more generally applicable.\\n\\n## Benchmarking\\nWe were asked to compare to methods that apply post-finetuning quantization (Lam et al., 2020, Zou et al., 2020), as one of our main contributions is quantization-aware finetuning. We agree that our initial manuscript lacked experimental evidence showing that quantization-aware finetuning indeed improves final compression performance. As such, we will update the manuscript with an ablation experiment to quantify this quantization gap. Our results show that this gap is substantial; supporting our claim that quantization-aware training improves compression performance.\\n\\n### References\\n - Eirikur Agustsson and Lucas Theis. Universally quantized neural compression. \\n - Eirikur Agustsson, David Minnen, Nick Johnston, Johannes Balle, Sung Jin Hwang, and George Toderici. Scale-space flow for end-to-end optimized video compression.\\n - Johannes Ball\\u00e9, David Minnen, Saurabh Singh, Sung Jin Hwang, and Nick Johnston. Variational image compression with a scale hyperprior.\\n - Tong Chen and Zhan Ma. Variable bitrate image compression with quality scaling factors.\\n - Abdelaziz Djelouah, Joaquim Campos, Simone Schaub-Meyer, and Christopher Schroers. Neural inter-frame compression for video coding.\\n - Guo Lu, Wanli Ouyang, Dong Xu, Xiaoyun Zhang, Chunlei Cai, and Zhiyong Gao. Dvc: An end-to-end deep video compression framework.\\n - David Minnen, Johannes Ball\\u00e9, and George D Toderici. Joint autoregressive and hierarchical priors for learned image compression.\\n - Yibo Yang, Robert Bamler, and Stephan Mandt. Improving inference for neural image compression.\"}",
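A brief sketch of the spike-and-slab idea described above: a narrow Gaussian "spike" around the zero-update mixed with a wider "slab", so that unchanged parameters are nearly free to code. The mixture weight and scales (`alpha`, `s_spike`, `s_slab`) are our illustrative assumptions, not the paper's values.

```python
# Bin probability mass under a spike-and-slab prior over quantized updates.
# Illustrative sketch; parameter values are assumptions.
import numpy as np
from scipy.stats import norm

def bin_mass(delta_bar, t, alpha=0.9, s_spike=1e-5, s_slab=1e-2):
    """Mass of the width-t bin centered at quantized update `delta_bar`."""
    def gaussian_mass(scale):
        return (norm.cdf(delta_bar + t / 2, scale=scale)
                - norm.cdf(delta_bar - t / 2, scale=scale))
    return alpha * gaussian_mass(s_spike) + (1 - alpha) * gaussian_mass(s_slab)

t = 1e-3
for d in (0.0, 5e-3):  # a zero-update is cheap, a large update costs more bits
    print(d, -np.log2(bin_mass(d, t)))
```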
"{\"title\": \"Instance specific finetuning method for image and video compression but with weaknesses in the experimental section\", \"review\": \"**Summary**\\n\\nThe paper describes an instance specific finetuning method for image and video compression including finetuning the decoder. Based on the shown experiments, the required additional bits for sending the updated finetuned model parameters are worth the achieved increase in RD performance. However, the method has only been evaluated on one video dataset and with respect to its own baseline and not with respect to any other existing method.\\n\\n**Strength**\\n\\n= Method which also considers to finetune/adapt the decoder side of image compression network, for improved performance. \\n\\n= Paper is self-contained by recapping the necessary basic formulations.\\n\\n**Weakness**\\n\\n= Method has only been evaluated with respect to its own baseline method (image compression model without finetuning).\\n\\n= Method has only been evaluated on one video dataset, but by compressing frame by frame, therefore not taking advantage of temporal redundancy. \\n\\n= Given that it is an image compression method, the proposed instance adaptive method could also be evaluated on the e.g. clic validation set.\\n\\n*Some open questions*\\n\\nIs $\\\\bar{M}$ computed for the whole video and averaged per frame for the results in Table 1 and therefore dependent on the length of the video?\\n\\nDo the authors have some intuition, why some videos are easier to finetune than others?\\n\\n*Minor*\\n\\nReferences of arxiv papers, which have been published before submission deadline, can be updated with the respective conference.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A nice idea, well communicated\", \"review\": \"This paper investigates how to improve the test time performance of learned image compression models through finetuning of the full model. The authors finetune the model (both the model parameters and the prior on the latent space) for every test-time instance, appending the model updates to the bitstream. The model updates are coded according to a discretised, mean-zero Gaussian distribution with a single learned variance. They demonstrate that this approach yields a superior rate-distortion curve than the non-finetuned model on a set of I-frame video data.\\n\\nOverall I like the paper. It is clearly and simply written, with good motivation given for the concepts introduced. The method itself is also straightforward to understand, and seems like a sensible approach. Although on the surface the idea of doing instance-specific fine-tuning might seem to be impractical, it benefits from the fact that the extra encoding time of fine-tuning the model is paid by the sender. The receiver only has to pay the extra cost of decoding the model updates, which is fast if the coding distribution is factored (as it is in this paper). These asymmetrical coding times are often acceptable, as the authors note, since encoding-decoding is often a one-to-many relation.\\n\\nI think the results demonstrated by the method are positive enough to warrant the extra overhead introduced, with a ~1dB gain for a given bitrate. I also appreciate the breakdown of where the extra model delta bits are allocated as per Figure 3, and the visualisation of the training performance in Figure 2b. I think these give a nice feel for the way the method works and the finetuning progresses on this particular instance.\\n\\nDo the authors have any comment on why the encoding-only finetuning yields barely any benefit, as shown in Figure 2a? My interpretation might be that finetuning only the encoder is sub-optimal because the latent prior is fixed. The prior will have been learned jointly with the encoder on the global model, such that the encoder maps to parts of the latent space that the prior assigns mass to. As such, if the prior is fixed and you then finetune the encoder, the encoder still has to map to parts of space that are assigned mass in order to avoid the rate becoming too large. It might be interesting to see the results if the encoder and prior are finetuned but not the decoder. Although if you are finetuning (and communicating side information for the prior updates) then it is probably very little extra cost to also update the decoder. The results also seem to indicate that most bits for the model updates are spent on the decoder weights.\\n\\nI also think it would have been good to include results using a learned prior to code the model updates, not a Gaussian. The authors do mention this as a possibility in the discussion, but surely it would have been very easy to implement? Given that they are already doing so for the latent space itself. Another small point about the Gaussian quantisation, is that an alternative discretisation is that of assigning equal mass to all bins, as per https://arxiv.org/abs/1901.04866 (see Appendix B). This results in simple coding - the discrete distribution is uniform, since the bins all have equal mass - and ensures that the discretisation is appropriate for the Gaussian.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good idea and sound method, but experiments can be better executed.\", \"review\": \"This paper considers the problem of per-instance model adaptation for neural data compression, and proposes a new method for end-to-end finetuning the model that is quantization-aware, by introducing an additional term that measures the compression cost of model update to the typical rate-distortion loss. Evaluation on the UVG dataset shows encouraging performance, with an average distortion improvement of approximately 1 dB for the same bit rate compared to the naive baseline (without fine-tuning).\\n\\n------------------------\", \"pros\": \"1. The paper is well written and concepts are clearly explained.\\n2. The method is sound, and incorporating the entropy cost of model update during fine-tuning offers a conceptually appealing (and likely more performant, though not empirical verified (see below)) approach compared to previous methods (Lam et al., 2020, Zou et al., 2020) that tackles model update quantization after fine-tuning.\\n\\n------------------------\", \"cons\": \"The experiment section is the weakest point. Particularly:\\n1. It's unclear from the description if the evaluation on UVG actually \\\"adapts the entire model to a single data instance\\\" (i.e., *for each image*) as claimed, or amortizes the model update cost over a batch of all the images in a video. The paper claims that \\\"In this paper we consider the extreme case where the domain of adaptation is a single instance, resulting in costs for sending model updates which become very relevant\\\", but this would highly misleading if all the experiments were conducted in a batch compression setting.\\n2. If the experiment did perform per-instance model adaptation, then it would be much more convincing to evaluate on standard datasets like Kodak and Tecnick from the image compression literature, instead of frames of UVG videos.\\n3. Since the paper's contribution is about improving the existing fine-tuning strategy that tackles model update quantization after fine-tuning (e.g., Zou et al., 2020), the proposed method should then also compare to these baselines to really assess its performance.\\n4. It would also be interesting to compare with approaches that optimize the encoded latents (e.g., Yang et al., 2020), which also achieve close to 1 PSNR improvement at equal bitrate without the overhead of decoder updates.\\n\\n------------------------\", \"questions\": \"1. Can the author comment on how \\\"the quantization bin width t and standard deviation \\u03c3 of p[\\\\bar \\u03b4]\\\" (Sec 4.3) are chosen? How sensitive is the compression performance to their choice, e.g., is it possible to discretize so finely that no amount of RD improvement can overcome the model update cost?\\n2. The use of the continuous density for the M (model update cost) term in Eq 2 is established in the Appendix A by showing that the gradient of the discrete cost \\\\bar M has the same gradient (up to first order) as that of -log p(\\u03b4) based on the density p(\\u03b4). Did I understand this correctly? But M = -log p(\\u03b4) doesn't actually give an estimate of the cost after discretization \\\\bar M = -log p[\\\\bar \\u03b4]. Instead, the typical thing to do in literature (due to Balle et al.) is to actually minimize -log p[\\\\bar \\u03b4], where \\\\bar \\u03b4 = round(\\u03b4), and the rounding can be either approximated by uniform noise injection or STE. 
Can the authors comment on this choice of their method?\\n\\n------------------------\\n\\nTypos and minor mistakes/fixes:\\n1. p. 2, under eq (1): The R-D loss is equivalent to the *negative* ELBO in VAEs;\\n2. Does Figure 3 bottom show the histogram of bit allocation for \\\\bar \\u03b4? If so then the caption can just say \\\"Bottom: histogram of bit allocation for \\\\bar \\u03b4\\\" as it's clearer.\\n\\n------------------------\", \"update_after_author_response\": \"I have increased my score in light of the substantial improvement to the manuscript and experiments.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
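The noise-injection proxy the reviewer refers to (due to Ballé et al.) can be checked numerically: adding u ~ U(-t/2, t/2) to δ and evaluating the box-convolved density gives a differentiable stand-in for the discrete cost -log p[round(δ)]. A self-contained sketch under our own illustrative assumptions:

```python
# Compare the exact discrete coding cost with the uniform-noise proxy.
# Illustrative assumptions for sigma, t, and the update distribution.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
sigma, t = 1e-2, 5e-3
delta = rng.normal(0.0, sigma, size=100_000)

# Exact discrete cost: -log2 of the Gaussian mass in each quantization bin.
db = t * np.round(delta / t)
mass = norm.cdf(db + t / 2, scale=sigma) - norm.cdf(db - t / 2, scale=sigma)
exact_bits = -np.log2(mass).mean()

# Proxy: the Gaussian convolved with U(-t/2, t/2), evaluated at delta + u.
u = rng.uniform(-t / 2, t / 2, size=delta.shape)
smoothed = (norm.cdf(delta + u + t / 2, scale=sigma)
            - norm.cdf(delta + u - t / 2, scale=sigma))
proxy_bits = -np.log2(smoothed).mean()
print(exact_bits, proxy_bits)  # close on average
```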
"{\"title\": \"Misleading results [updated]\", \"review\": \"Summary\\n-------------\\nThis paper extends neural compression approaches by fine-tuning the decoder on individual instances and including (an update to) the decoder in the bit-stream for each image/video. The proposed approach is evaluated on the UVG dataset and the authors find a 1db improvement (PSNR) relative to their own baseline.\\n\\n\\nQuality (5/10)\\n----------\\nThe proposed approach is sound and it would have been interesting to see the gains which can be achieved by fine-tuning the decoder of common neural compression approaches. Unfortunately, the few results provided in the paper are not just failing to answer this question but are misleading. By choosing a weak baseline, the reader is led to believe that fine-tuning a decoder will lead to large gains when realistic models are likely to benefit significantly less.\\n\\nThe authors motivate their simple baseline by noting that their approach is \\\"model-agnostic\\\". However, while the approach is model-agnostic, the results and conclusions are not. And it is mostly the empirical results which will be of interest to the reader. (A reader familiar with compression will be very well aware that a neural decoder _could_ be included in the bit-stream, making the conceptual contributions less interesting.)\\n\\nThe evaluated model encodes each frame of a 600 frame video sequence _independently_. A more realistic decoder would be conditioned on information in previously encoded frames, changing its behavior. It is reasonable to expect that similar change in behavior is encoded in the model updates. That is, the proposed approach is likely less effective in a more realistic setting.\\n\\nIf model complexity was a concern, the authors could have evaluated their approach on images instead of videos. The results would have looked less impressive but would have been more useful. Alternatively, they could have chosen a different video compression architecture of low complexity but one which is still practically relevant. E.g., one motivated by computational constraints.\\n\\n\\nSignificance (4/10)\\n----------------\\nNeural compression is of interest to many people in the the ICLR community and exploring the fine-tuning of decoders would be a useful contribution to this field. The significance of this contribution is only limited by the lack of a meaningful results.\\n\\n\\nOriginality (4/10)\\n--------------\\nIncluding model information in the bit-stream is an old idea in compression and not limited to neural compression. For example, Netflix is optimizing their classical video codecs at a \\\"shot\\\" level. Even JPEG (1992) allows us to fine-tune the Huffman table for an individual image (\\\"optimized JPEG\\\").\\n\\nIt is also common for compression challenges to require the model to be included in the bit-stream (e.g., the Hutter prize or the P-frame challenge of CLIC 2020).\\n\\nMany papers have been written on the related topic of _model compression_ (e.g., Han et al., 2016), which should at least be acknowledged. Compressed model updates are also used in parallelized implementations of SGD (e.g., Alistarh et al., 2017).\\n\\n\\nClarity (8/10)\\n---------\\nThe paper is well written and clear.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
U6Xpa5R-E1 | Neural Potts Model | [
"Tom Sercu",
"Robert Verkuil",
"Joshua Meier",
"Brandon Amos",
"Zeming Lin",
"Caroline Chen",
"Jason Liu",
"Yann LeCun",
"Alexander Rives"
] | We propose the Neural Potts Model objective as an amortized optimization problem. The objective enables training a single model with shared parameters to explicitly model energy landscapes across multiple protein families. Given a protein sequence as input, the model is trained to predict a pairwise coupling matrix for a Potts model energy function describing the local evolutionary landscape of the sequence. Couplings can be predicted for novel sequences. A controlled ablation experiment assessing unsupervised contact prediction on sets of related protein families finds a gain from amortization for low-depth multiple sequence alignments; the result is then confirmed on a database with broad coverage of protein sequences. | [
"proteins",
"potts model",
"unsupervised learning",
"amortized optimization",
"structure prediction"
] | Reject | https://openreview.net/pdf?id=U6Xpa5R-E1 | https://openreview.net/forum?id=U6Xpa5R-E1 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"GNfm11hKcGn",
"5R5b0pIDXSY",
"MkIFKNGsSU",
"HEnQk7dHAYr",
"7GbrtFxsjO",
"R0uqyDPyLbB",
"RaKBBHsAfI",
"2sUOV0hn9Gm",
"RR-bm_lhDG-",
"jGK-5l3MqJ0",
"9u-kxZKIe6k",
"o4uDVmiPlb-"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040378624,
1606274871283,
1605932489780,
1605932322260,
1605931952806,
1605931622518,
1605931502035,
1605931226294,
1603924803868,
1603907019586,
1603902484657,
1602723262942
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3657/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3657/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3657/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3657/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This is a creative piece of work wherein learning of what is normally family-specific Potts models is turned into an amortized optimization problem across different families of proteins. The Potts models are learned with a pseudolikelihood approach, and the evaluation of the model against baselines is performed only on a contact prediction problem. This last point is problematic, because on the one hand, the authors use this \\\"as a proxy for the underlying accuracy of the Potts model learned\\\", and on the other hand, claim that \\\"we do not want to claim our method is state-of-the-art for contact prediction, it is certainly not\\\". Overall, the paper is promising, but is too preliminary on the empirics to warrant publication at this time.\"}",
"{\"title\": \"Uploaded revision\", \"comment\": [\"We have now uploaded a revision, with thanks to all reviewers for help improving the manuscript. The main changes in the new version are:\", \"Rewrote introduction: discuss importance of low-depth MSA setting [suggested by R5, R4, R3] and remove framing wrt self-supervised learning.\", \"PFAM Experiments (Sec 4.1 + Appendix):\", \"Results on additional clans (NADP_Rossman, HTH, AB_hydrolase); On all 3 of the new clans, NPM matches or exceeds the Independent Potts baseline even on deep MSAs. [suggested by R4, R3]\", \"Add a \\u201cNearest Neighbor Potts model\\u201d baseline in all plots [suggested by R5]. This baseline is stronger than the independent Potts model for low-depth MSA; but NPM consistently outperforms the new baseline.\", \"Add trajectory plot, showing near-monotonic decrease of the amortized pseudo-likelihood loss, and increase of the top-L long range contact precision. [suggested by R2]\", \"Add architecture ablation experiments, showing direct mhbf prediction performance, and showing advantage to adding convolutional layers. [suggested by R5]\", \"UniRef50 Experiments (Sec 4.2):\", \"Added discussion of the purpose of the training and test set. [suggested by R5, R3]\", \"Added bootstrapped confidence intervals, and more metrics in Appendix. [suggested by R5, R2]\", \"Added related work section to frame wrt prior work on protein language modeling, unsupervised and supervised contact prediction. [suggested by R4]\", \"Improved discussion\"]}",
"{\"title\": \"General comments continued\", \"comment\": \"### (iii) The UniRef50 split\\n\\n\\nR5+R3 asked about the UniRef50 experimental methodology in Section 4.2 and Figure 4. We address the concerns below and will revise the paper to incorporate the feedback from reviewers around this.\\n\\nThis experiment asks whether an amortization gain can be achieved for sequences that are reasonably different from the model\\u2019s training data. We believe that using a clustering at 50% identity is reasonable, since this is a method for unsupervised learning from sequences - no structures are used in training the model, and no claims are made about generalization across structural families. We note the baseline independent Potts models have access to the same underlying database of sequences as the Neural Potts model - thus there is no form of data leakage that could put the baseline at a disadvantage.\\n\\nThis experiment does not answer any questions about whether generalization occurs across structural families. While we agree that the question of generalization across structural families (or superfamilies and folds) is certainly of biological interest, a deep investigation of this would be tangential to the main argument of the paper, which is to show that the amortization gain expected from the objective can be realized for sequences outside the training set.\\n\\nTo contextualize the significance of the results on the test set of UniRef50 MSAs, let us consider the setting where the amortized Neural Potts Model (i) matches the independent Potts model on training data: this means the NPM model can predict good quality couplings from a single feedforward pass without access to the full MSA at inference time; (ii) surpasses the independent model on training data: the amortization actually helps NPM to improve over independent Potts models, i.e. it realizes inductive generalization gain; (iii) matches the independent model on unseen sequences: indicates the model is able to synthesize a good Potts model for sequences not in its training data; (iv) surpasses the independent model on unseen data: the model actually improves over an independent Potts model even for sequences not in its training data. In combination these results indicate a non-trivial generalization happens when NPM is trained on UniRef50.\\n\\n### References\\n* [1] Jones et al. (2012) PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments\\n* [2] Kamisetty et al. (2013) Assessing the utility of coevolution-based residue-residue contact predictions in a sequence- and structure-rich era\\n* [3] Moult et al. (2016) Critical assessment of methods of protein structure prediction: Progress and new directions in round XI\\n* [4] Tetchner et al. (2014) Opportunities and limitations in applying coevolution-derived contacts to protein structure prediction\"}",
"{\"title\": \"General comments - response to all reviewers\", \"comment\": \"We thank the reviewers for their time and helpful feedback. We are pleased to see that all reviewers see value and novelty in our work. (R5: \\u201cI find the idea / approach to be very creative(..) I applaud the authors for taking a unique approach\\u201d, R2: \\u201cmodeling energy landscapes using NLP techniques is a timely and interesting problem\\u201d, R4: \\u201cidea is particularly compelling\\u201d, \\u201csound from a theoretical standpoint (...) [amortized optimization] framework is well motivated (...) \\u201c, R3: \\u201capproach is novel (...) a well-motivated idea\\u201d).\\n\\nIn this comment we address common points in the reviews around (i) need for a more extensive set of experiments, (ii) the relevance of learning Potts models for proteins with low depth MSAs; (iii) the methodology of the UniRef50 experiments.\", \"we_would_like_to_emphasize_what_we_see_as_the_main_contribution_of_our_work\": \"the introduction of an amortized optimization objective for learning Potts models, and demonstration that the amortization gain expected in theory can be realized. We note all reviewers agree our paper shows this - we believe this is a significant and exciting result convincingly demonstrated by the paper. We hope to foreground this contribution studying amortized optimization, and ask that the experimental evaluations be considered in the context of supporting this result.\\n\\nWe certainly don\\u2019t want to claim that the current results present a complete solution to protein structure prediction, rather we propose this approach as a new direction for unsupervised inference of structure from sequences. We further discuss the importance of the low MSA depth problem setting below (where our approach shows an advantage in experiments), and will improve the treatment of this in the revision.\", \"there_are_many_potential_avenues_to_improve_the_practical_utility_of_the_approach_which_could_be_explored_in_the_future\": \"e.g. new model architectures or higher capacity models, use of amortized couplings in a supervised pipeline, or combining independent Potts models with amortized couplings. Some of these have been pointed out by reviewers as possible additional experiments. Although it is beyond the scope of the paper to pursue these extensions, we hope the reviewers also see the existence of these and other possibilities for future development as part of the value of the approach.\\n\\n### (i) Additional experiments we are adding in the revision\\n\\nWe appreciate the suggestions from all reviewers on how to improve the experiments. We outline below an experimental plan for the revision incorporating these suggestions. See also the individual comments for more detail.\\n\\n* A baseline to Fig 3 (PFAM): Nearest Potts model in train set. Align the validation sequence to all train families from the clan, and use the Potts model from the closest match. (R5)\\n* An ablation of the architectures (convolutional layers, and multi-head bilinear form weight tied or untied) on PFAM. (R5)\\n* A plot of the Neural Potts Model objective values against contact precision over the course of training. (R2)\\n* Additional experiments on HTH and AB_hydrolase clans. (R4, R3).\\n\\n### (ii) Relevance of the low depth MSA setting\\n\\nThe setting of unsupervised structure learning for low depth MSAs is recognized as an important problem in the literature and is a known limitation of Potts model based methods. 
Potts models perform poorly when few related sequences are available in an MSA (e.g. Jones et al. 2012, Kamisetty et al. 2013, Moult et al. 2016). This trend is also observed in our experiments (Figures 3 and 4).\\n\\nThe relevance of the low depth setting is reviewed in Tetchner et al. 2014: \\u201cmany of the largest protein families typically have a structure available for template-based modeling [...] there is clearly more interest in applying covariation analysis to small- and intermediate-sized families.\\u201d\\n\\nAdditionally Tetchner et al. 2014 note: \\u201cProteins from higher organisms are a particular problem in terms of available data, as they suffer from there being far fewer sequenced genomes from different species than for bacterial proteins.\\u201d Also from Tetchner et al. 2014: \\u201cThe majority of successes in coevolution-based protein structure prediction have come from proteins that share common ancestry with bacterial proteins, and the comparative lack of available eukaryotic sequences limits the ability to apply covariation methods to families within higher organisms.\\u201d\\n\\nThere is further support for the problem setting in the dataset of MSAs we constructed for the UniRef sequences. Figure 6 in the Appendix, which studies the distribution of MSA depths across UniRef50, indicates that 19% of sequences in UniRef50 have MSAs with fewer than 10 sequences, (38% when a minimum query sequence coverage of 80% is specified).\\n\\nIn Figure 4 we find a clear advantage for the Neural Potts model when the MSA has fewer than 10 sequences. Our work shows a path toward improving unsupervised structure inference for low depth MSAs.\"}",
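For readers unfamiliar with the objective being amortized here, a tiny self-contained sketch of the site-wise Potts pseudo-likelihood that the independent baselines maximize per family (and that NPM predicts couplings for). Shapes and random parameters are purely illustrative.

```python
# Site-wise pseudo-log-likelihood of a Potts model over an aligned sequence.
# h: (L, A) fields; J: (L, L, A, A) couplings with J[i, i] = 0 (not enforced
# in this toy example). Illustrative sketch, not the paper's implementation.
import numpy as np

def pseudo_loglik(seq, h, J):
    """seq: length-L int array with values in [0, A)."""
    L, A = h.shape
    total = 0.0
    for i in range(L):
        logits = h[i] + sum(J[i, j, :, seq[j]] for j in range(L) if j != i)
        m = logits.max()
        total += logits[seq[i]] - m - np.log(np.exp(logits - m).sum())
    return total

rng = np.random.default_rng(0)
L, A = 8, 21
h = rng.normal(0.0, 0.1, size=(L, A))
J = rng.normal(0.0, 0.01, size=(L, L, A, A))
seq = rng.integers(0, A, size=L)
print(pseudo_loglik(seq, h, J))
```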
"{\"title\": \"Individual response to R3\", \"comment\": \"We thank the reviewer for their detailed feedback and interest in the approach.\\n\\n> the paper needs further experimental validation for acceptance\\n\\n\\nThank you for all the great suggestions. We will add additional experiments (detailed below) in response to your comments.\\n\\n> 1. Shallow MSAs\\n\\nThis is a problem setting known to be important in the literature. See e.g. the review by Tetchner et al. cited in the general comment (ii) for background on limitations of existing methods and importance of proteins with low MSA depth. Additionally we reference Figure 6 for the prevalence of low-depth MSAs (19-38% of sequences in UniRef50 with MSA depth <10). We find that independent Potts models perform poorly in this regime (as is already known), and that the NPM shows an advantage, confirming the core theoretical contribution (inductive generalization gain) can be realized in this setting.\\n\\n> 2. UniRef50: information leakage\\n\\nPlease refer to the general comment (iii). Please note that the model is trained only on sequences, without structure supervision. Because it is a purely unsupervised method there is no labeled data that could leak between the train and test splits. Also note that the independent baselines (at test time) and neural model (at train time) have access to the same underlying sequence database so they are on equal footing when they are evaluated. We don\\u2019t claim to measure generalization between structural families; the goal of the experiment is simply to show the model can synthesize reasonable couplings for sequences outside its training set.\\n\\n> PFAM vs UniRef50:\\n\\nThis is a great observation. Indeed PFAM is a much simpler task: the model (despite being smaller than the UniRef50 model) has enough capacity to well amortize all Potts models. In contrast UniRef50 spans a much wider diversity of families, so even the larger model does not have enough capacity to perfectly fit the UniRef families. More parameters and better architectures can likely improve this.\\n\\n> More than one pfam clan\\n\\nThanks for this suggestion, we will add additional experiments on HTH and AB_hydrolase clans.\\n\\n> UniRef50 Split \\n\\nPlease refer to the general comment (iii), and comment above.\\n\\n> only evaluating the top L/5 contact prediction task, try other thresholds\\n\\nIn the revision we will provide both the top-L and L/5 metrics, as well as an AUC computed over different thresholds in the Appendix.\\n\\n> Potts Model parameters to be fed into other structure prediction models\\n\\nThis is a really interesting suggestion and we agree that the integration of the Neural Potts Model in supervised pipelines is a great opportunity for future work. Our preliminary experiments here were inconclusive.\\n\\n> References for important proteins with shallow MSAs.\\n\\nSee general comment (ii).\\n\\n> How do you compute the MSA?\\n\\nFor all sequences, we construct MSAs using HHblits (Steinegger et al., 2019) against the UniClust30_2017_10 database. HHblits was performed for 3 iterations, with an e-value of 0.001. (See Appendix C.4)\\n\\n> PFAM: how do you vary the MSA Depth?\\n\\nThanks for the question, we will add this detail in Appendix C.3. We keep the top-M sequences in the MSA, after applying HHfilter to filter too redundant sequences and increase diversity (See Appendix C.3). The MSA sequences are ordered by e-value, so we keep the most maching sequences.\"}",
"{\"title\": \"Individual response to R2\", \"comment\": \"We thank the reviewer for the feedback.\\n> Is the NPM objective derived from some probabilistic model?\\n\\nIn brief, no, we derived the Neural Potts Model as amortized optimization (Section 2.1) of a large collection of Potts models optimized with pseudo-likelihood maximization, see Section 2.\\n\\n\\n> plot the value of the proposed NPM objective in the Transformers as a function of the Top-L precision\\n\\nThis is a great suggestion. We are adding a plot of the NPM loss value against contact top-L precision, computed on the reduced-MSA bucket during PFAM training. It shows a monotonously decreasing NPM loss and increasing contact precision over the course of NPM training. This is in line with the common usage of the (non-amortized) independent Potts model pseudo-likelihood maximization as proxy loss for downstream contact prediction [1,2].\\n> punctuation and labeling of figures\\n\\nThanks for the detailed read and suggestions. Figure 2 shows a 1D cartoon of the loss landscapes (train and generalization) in function of the Potts model parameters. Figure 3: \\u201c(we show) averages and standard deviations across (cross-evaluation) rounds\\u201d.\\n\\n* [1] Balakrishnan et al. (2011). Learning generative models for protein fold families.\\n* [2] Ekeberg et al. (2013). Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models.\"}",
"{\"title\": \"Individual response to R5\", \"comment\": \"We thank the reviewer for the detailed feedback.\\n> Summary \\n\\nWe will clear up the framing wrt self-supervised learning. We are glad to see the reviewer appreciates the novelty of the core approach.\\nRegarding the utility and significance of the results, please refer to the general comment (ii). In summary: we reference Figure 6 for the prevalence of low-depth MSAs (19-38% of sequences in UniRef50 with <10 sequences in their MSA), highlighting that independent Potts models perform poorly in this regime, and that the core theoretical contribution (inductive generalization gain) is experimentally confirmed. We outline avenues to improve the practical utility e.g. model improvements and scaling, integration in supervised pipelines, and combining NPM with independent Potts models.\\n\\n**PFAM**\\n\\n> top-L vs L/5\\n\\n\\nThanks for calling our attention to this - it was an oversight. In the revision we will provide both metrics as well as an AUC computed over different thresholds in the Appendix.\\n\\n> new baseline experiment:\\n\\nThank you for a great suggestion. We are working to implement and evaluate this baseline now.\\n\\n**UniRef experiments**\\n\\n\\n> purpose of heldout set:\\n\\nThank you for pointing this out. We will revise the paper to better explain this experiment. This is also discussed in the general comment (iii). To respond directly here -- this experiment is looking to see whether an amortization gain can be realized for test sequences that are reasonably different from the train set. The experiment is not trying to get at any biological notion of underlying structural homology. We agree that it could also be interesting to look at generalization across different levels of structural homology; however this is tangential to the focus of the paper, and beyond its scope since it would require pre-training the model with very different datasets constructed to isolate the underlying biological factors of interest.\\n\\n> short-range\\n\\nWe will add these in the appendix; we wanted to make space for the scatter plot. The trend tracks.\\n\\n\\n>unclear statement: subsample to M=30\", \"rephrased_to\": \"\\u201cDuring training in each epoch we randomly subsample a different set of 30 sequences from the MSA\\u201d. Added M to the 2nd summation of Eq 8.\\nThis refers only to NPM training. One can think of this as similar to a minibatch: for every gradient update a batch of N random sequences is sampled as input; and N x [M random target sequences] are sampled. Over the course of training, for enough epochs through the dataset, every sequence in the MSA will be sampled.\\n\\n**Further comments**\\n\\n> existing methods that currently exists for shallow MSAs. For example, standard semi-supervised methods (...)\\n\\nWe do not want to claim our method is state-of-the-art for contact prediction, it is certainly not. In the paper we use contact precision as a proxy for the underlying accuracy of the Potts model learned by either the amortized objective or the independent model. Our goal is to show an improvement for couplings learned through amortized optimization. Therefore the appropriate comparison is to the independent Potts model. 
The contact precision simply serves as a way to measure the quality of the model.\\nWe agree that it would be valuable to include a discussion of other approaches that are relevant for the shallow MSAs and will do this.\\n\\n\\n> Model architecture ablation\\n\\nThanks for the suggestion, we are running the architecture ablation experiments on PFAM now and will add these to the paper. We found in preliminary experiments that those architectural improvements (untying, convolutional layers) helped, but did not have time to implement them in the large-scale UniRef runs. We have started new UniRef runs with the improved architecture and expect improved results to be ready to add in the camera-ready version but not by the revision deadline.\\n\\n\\n> please clarify: \\u201cwe randomly subsample the MSA down to 100 sequences as NPM target sequences\\u201d\\n\\nThanks for pointing this out, this sentence is out of place. It refers to the same M as in the previous comment; in this case we randomly subsample a different set of 100 sequences from the MSA on each epoch during NPM training.\"}",
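The per-epoch target subsampling described in this reply is simple to picture: each epoch draws a fresh set of M sequences from the MSA as pseudo-likelihood targets, so the whole MSA is covered over many epochs. A minimal sketch, with all names and sizes being our illustrative assumptions:

```python
# Per-epoch subsampling of MSA target sequences, minibatch-style.
# Illustrative sketch; not the authors' training code.
import numpy as np

def sample_targets(msa, M, rng):
    """msa: (depth, L) int array of aligned sequences; returns M random rows."""
    idx = rng.choice(msa.shape[0], size=min(M, msa.shape[0]), replace=False)
    return msa[idx]

rng = np.random.default_rng(0)
msa = rng.integers(0, 21, size=(500, 120))        # toy MSA: depth 500, length 120
for epoch in range(3):
    targets = sample_targets(msa, M=30, rng=rng)  # new draw every epoch
```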
"{\"title\": \"Individual response to R4\", \"comment\": [\"We thank the reviewer for the detailed feedback. Let us provide answers on the questions in bullet point form:\", \"*Motivation for the shallow-MSA regime*:\", \"See the general comment (ii). We will indeed add some discussion in introduction along the lines discussed there.\", \"*Paragraph about protein language models*:\", \"We agree with the reviewer and are adding some discussion in the main text.\", \"*Vanilla Transformer*:\", \"We will clarify. We are using a bidirectional transformer as in BERT. We use a BERT/MLM-pretrained transformer (following Rives et al. 2019) to initialize the NPM.\", \"*Gap on deeper MSAs*: we believe that the gap on deeper MSAs is mainly due to the limited capacity/parameter count of the single model which has to capture a large variety of Potts models. For comparison, a single independent Potts model for a sequence of length 500 has approx 110M parameters (500 squared by 21 squared), though heavily overparametrized and regularized.\", \"*Detect when NPM will underperform independent Potts model*: This is an interesting question for future work. The most obvious candidate would be the primary train objective (pseudo-likelihood loss on the MSA); however this quantity is heavily dependent on regularization (which varies between the independent Potts model and NPM), and wasn\\u2019t useful in our preliminary experiments.\", \"*How expensive is it to train one NPM Vs 1000 Potts models*:\", \"This is a good question - computational cost will depend on the architectural details (depth and width mostly) of NPM. Usually NPM will converge in fewer iterations through the data. However, there is a clear savings in *disk space* to actually store the parameters of one NPM versus 10k Potts models (typical for supervised training); since each independent Potts model is on the order of 1-100M parameters, storing 10k Potts models consumes several TB of disk space (vs a few GB for NPM).\", \"*Distribution of Meff*: (see also general comment (ii) ). The distribution of msa depth (M, upper bounds Meff) in UniRef50 is in Appendix, Figure 6, and shows that 19-38% of the UniRef50 sequences have <10 sequences in their MSA, and 30-55% of the UniRef50 sequences have <100 sequences in their MSA.\", \"*Upweighting with sqrt(Meff)*: this reweighting causes the optimization to pay more attention to well-formed, deep MSAs and discount the shallower MSAs, which we found to be helpful. It can be seen as a middle ground between the vanilla formulation with (1/Meff), where each MSA contributes equally independent of its effective depth, and dropping the (1/Meff) completely which would make each MSA contribute proportional to its effective depth.\", \"*performance gap on HTH*:\", \"Thanks for the suggestion. We will add additional experiments on HTH and AB_hydrolase clans.\"]}",
"{\"title\": \"NPM is a promising idea but the paper needs more work\", \"review\": \"###########################################################################################################\\n\\n\\nSummary of Paper\\nThe motivation for the paper is a bit unclear. In the introduction, the authors begin by claiming to extend self-supervision \\\"to information from a set of evolutionarily related sequences\\\". However, it does not appear that the model is at all used for pretraining / representation learning as would be expected of a self-supervision method. No further connections to self-supervision are made in the rest of the paper. Based on the conclusion it seems that self-supervision is a future direction. If so, I would suggest rewriting the introduction to more clearly state the motivation of this paper.\\n\\nThe rest of the paper is more clear. The authors tackle the problem of predicting protein contacts. Standard unsupervised approaches fit a Potts model (often with pseudolikelihood) and then use the parameters of the fit model to make predictions about contacts. Such approaches require a MSA of reasonable depth and the amortized optimization approach suggested by the authors has the potential to work better than standard unsupervised approaches in the small-MSA regime by sharing information from all the protein families that the model is fit on.\\n\\nThe authors propose a meta-learning approach wherein a neural network (here a transformer) takes as input a single sequence and outputs the parameters of a Potts model. The model is trained with a pseudolikelihood loss across all sequences within individual families. This is a new and clever idea.\\n\\nThe main experimental result is that the NPM outperforms standard Potts models for shallow MSAs. This demonstrates that there apears to be some utility to this approach. However, the precision is still very poor and the authors make no claim that the enhanced accuracy is of biological utility (i.e. can be used to fold the protein). There are other tools that perform contact prediction from a single sequence with higher accuracy than the NPM. Furthermore NPM performs worse than standard Potts models for medium and large MSAs. This seems to be a significant drawback of the method. \\n\\n#############################################################################################\\n\\n \\nQuestions / Suggested Experiments:\", \"pfam_experiment\": \"Can the authors please explain why the PFAM experiments are evaluated on top-L precision whereas the remaining experiments are top-L/5 precision? The lack of explanation in the paper suggests the metrics may be cherry-picked to best show the performance of the model. \\n\\nSince all the proteins in the family share structural similarity, it is not clear that NPM has learned to transfer any information. After looking at the structures in PyMOL it seems that significant substructure is shared between the different families. Here is a resonable baseline/experiment to add: Align the query sequence to the higher-depth MSAs and then using the MSA that best fits the query sequence, predict contacts based on the individual Potts model trained on that family. This would elucidate whether or not NPM is simply treating the query sequence as if it were from one of the higher-depth MSAs.\", \"uniref_experiment\": \"The details of this experiment could use more explanation. I did not see any explanation of the purpose of the training and heldout sets, which in this setting are not obvious. 
My guess is that the authors are trying to demonstrate that the NPM can generalize zero-shot to new families? If so, please mention it in the text. Random splits are problematic for demonstrating zero-shot generalization since there could be very similar families in the train and heldout sets (e.g. same PFAM clan or structural class). As written, the paper barely discusses the purpose of the heldout set but if it is meant to convey some test of generalization the authors should more carefully construct their heldout test set accordingly.\\n\\nWhy is there no plot for short-range predictions, as there was for the PFAM experiment?\", \"there_is_an_unclear_statement\": \"\\\"During training, we iterate over all sequences and their MSAs on every epoch, and subsample to M=30 sequences per MSA.\\\" Is this just for NPM or also for the Potts model? I would be very concerned if this is also what is done for the Potts models as it would significantly harm their performance. The authors need to clarify this in the text of the paper.\\n\\nThe authors do not discuss other existing methods that currently exists for shallow MSAs. For example, standard semi-supervised methods are fine-tuned on contact prediction and thus, work with single-sequence inputs. I believe these methods may perform as well or better than NPM. The authors should discuss these methods and include comparisons. \\n\\n\\nModel Architecture\", \"please_do_the_following\": \"1) In Appendix B.1 you describe a number of \\\"tricks\\\" to reduce the number of parameters. These include (1) a low-rank decomposition of the bilinear form, (2) weight tying by amino acid for the decompositions (3) Convolutional layers. Please provide experiments showing the utility (or lack thereof) of each of these. This will lift the results from \\\"here is what happens if you use these tricks\\\" to \\\"here is data suggesting that these choices we made are better than the naive choices\\\". Thus, add more impact to the paper. \\n\\n2) No explanation is given for why the architectures are so different for PFAM and UniRef (e.g. convolutional layers for PFAM only). Please provide an explanation. A priori, I see no reason to use different architectures for these two different domains. It seems to only complicate the paper and methods. If these choices really are important, then this seems to suggest overfitting to particular tasks rather than having a general solution.\", \"appendix_training_details\": \"I personally found the written descriptions here to be too vague to understand the exact details. This made it harder to evaluate the paper. Can you please clarify:\\n1) \\\"we randomly subsample the MSA down to 100 sequences as NPM target sequences...\\\". Based on equation (7) this means the pseudo likelihood is evaluate on 100 random sequences from the MSA. This seems to directly contradict the experiment shown in Figure 3 where the number of sequences in the MSA is varied up to 1000. Please explain what is happening here?\\n\\n\\n###########################################################################################\\n\\nExplanation of Score\\n\\nThis paper was difficult to score because I find the idea / approach to be very creative and different from standard techniques of borrowing the latest NLP pertaining task and applying it to proteins. I applaud the authors for taking a unique approach. \\n\\nThe main drawback is that the authors exclusively evaluate the model on contact prediction and do not demonstrate convincing performance. 
While the model sometimes outperforms standard approaches with shallow MSAs, the performance is still quite bad and no comparisons to other methods are shown. \\n\\nThus, overall I am impressed with the direction of the paper but think it needs more work. \\n\\n\\nUpdated score from 5 -> 6 after clarifying feedback from the authors.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting problem, however, more validations required to strengthen the main claim.\", \"review\": \"The paper proposes a new object called Neural Potts Model (NPM) to train a Transformer to learn the local energy landscape of protein sequences. The problem of modeling energy landscapes using the power of techniques in natural language processing (NLP) is a timely and interesting problem. However, there are some concerns that limit the strength and the main claim of the paper that needs to be addressed.\\n\\nIs the NPM objective in equation (7) derived from some probabilistic model? While the intuition behind the objective can sound plausible from the discussion that follows, I am still curious what problem formulation is this objective solving? Is it a proper likelihood term?\\n\\nSince the paper is advocating the use of NPM objectives, is it possible to plot the value of the proposed NPM objective in the Transformers as a function of the Top-L precision? This will strengthen the claim in proving all the gains are actually coming from the objective and not other possible factors in training the model.\\n\\nThe paper can be improved by more effort in punctuation and proper labeling of figures. There are many sentences and phrases that need a comma for better readability. There are extra parentheses when referencing Figures. What are the axes showing in Figure 2? What are the highlighted shades indicating Figure 3 (std or sem)? Is it possible to have error bars for Figure 4? In abstract, MSA is not defined.\\n\\nAlso given that there is sufficient space left in the paper, some material including the Algorithm box can be moved to the main text. Also, the author can consider to use the space to expand on the significance and importance of the NPM objective.\\n\\n** after rebuttal: thanks for addressing the comments. I have revised my score based on the discussion.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An amortized optimization framework to learn a mapping between protein sequence and the parameters of a Potts model describing the local energy landscape of the input sequence. Approach is well motivated and addresses limitations of existing approaches when few sequences are available in the neighborhood of the target sequence. Paper is well written and easy to read. Experimental results demonstrate that the approach works well in the regime for which it was introduced (low density regime)\", \"review\": [\"Overall comments\", \"The paper introduces an amortized optimization framework to learn a mapping between an input sequence and the coefficients of the corresponding Potts model -- a standard method used in computational biology to model protein sequences.\", \"The idea is particularly compelling in the regime where there exists a limited number of sequences (in the training data) that are similar to the target sequence. That would typically result in a low quality multiple sequence alignment (MSA), and in turn a poor Potts model.\", \"The idea is sound from a theoretical standpoint. The newly introduced framework is well motivated: authors did a good job at explaining the background for the problem and providing an intuition for why the proposed amortized optimization framework would improve over the current approach (i.e. independently training models on each MSA).\", \"The paper is very well written and easy to read -- the language is precise, everything is defined very clearly.\", \"Experiments show good results in the regime that matters most for the introduced Neural Potts model (NPM), i.e., low density of sequences around the query.\", \"The approach would be even more compelling if the NPM would not underperform the baseline in the regime where Meff (the \\u201ceffective number\\u201d of sequences in the MSA) is larger.\", \"Bridging that gap would be very compelling from a practical standpoint: large scale studies typically involve modeling thousands of distinct proteins, and thus fitting thousands of distinct models independently. Intuitively, there is shared information between these models, which is lost when models are trained independently but could be captured via a process akin to the one introduced in this paper.\", \"Detailed comments and questions\", \"Section 1\", \"Would suggest to add one sentence describing why it matters to solve the low Meff regime to further motivate your approach (e.g., which scientific questions would we be able to better answer?).\", \"Section 2\", \"I would add a paragraph on Transformers models for protein sequence embedding (since it is a core component of the NPM used in experiments), and cite a few of the key works in this area, for example:\", \"Rives et al., Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences (https://www.biorxiv.org/content/10.1101/622803v1)\", \"Madani et al., ProGen: Language Modeling for Protein Generation (https://www.biorxiv.org/content/10.1101/2020.03.07.982272v2)\", \"Vig et al., Bertology meets biology: Interpreting attention in protein language models (https://arxiv.org/abs/2006.15222)\", \"Rao et al., Evaluating Protein Transfer Learning with TAPE (https://arxiv.org/abs/1906.08230)\", \"Section 3\", \"Is there a particular reason for which you are using a vanilla Transformer architecture? 
Other Transformer architectures (e.g., BERT-like architectures) may be able to learn better sequence embeddings and in turn further close the gap with the independent Potts model in the large Meff regimes (see the above papers for example).\", \"Section 4\", \"How do you interpret the gap with the baseline Potts model approach in the medium-large Meff setting?\", \"Is there a way to detect (based on training) the situations where the NPM will likely underperform the independent Potts model baseline?\", \"Computationally, how expensive is it to train one NPM vs. one Potts model? For example, if I have 1000 proteins to model in the regime where both perform the same (Meff ~100), is it faster for me to train one NPM model or 1000 Potts models? I suspect the 1000 Potts models are still cheaper to train, but if not, that could be one strength of your approach to mention.\", \"What is the distribution of Meff over the data (section 4.2)? Based on the rightmost plots it seems that the majority of MSAs are in the \\u201c>500\\u201d regimes where NPM underperforms independent Potts models (asking that question with the understanding that you are more interested in addressing the low Meff setting here).\", \"Do you have an intuition for why the NPM appears to be doing better for medium-range interactions here vs. long-range in experiment 4.1?\", \"How do you explain the U-shape of the top-L/5 precision curves for NPM (on heldout data)? I would have expected that increasingly larger Meff would be associated with monotonically increasing performance (as is the case for the Independent Potts model curves in blue).\", \"Appendix C1\", \"Why upweight by a factor of sqrt(Meff(n))?\", \"Appendix C3\", \"Did you look at the performance gap on HTH? Since baseline Potts models yield poor long-range contact predictions on that dataset, perhaps NPM would have helped address that issue, given its increased ability to model long range dependencies as per 4.1. It was not clear whether the low performance is due to an intrinsic limitation of the Potts model or potentially due to a low density around each query point in HTH (in which case NPM may help).\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea. Limited experimental evidence.\", \"review\": \"This paper aims to improve low-depth MSAs, when a protein of interest only has a small number of known evolutionarily related sequences. This is a well motivated problem. MSAs are commonly used for a variety of purposes. Methods to enhance low-depth MSAs can be very useful. In particular, this paper focuses on using MSAs for contact prediction as a down-stream task. I'm not sure if contact prediction is the best use case for this, but it's a well-studied task for proof of concept.\\n\\nThe weight-sharing Neural Potts Model approach is novel (as far as I know) and is a well-motivated idea. There could be reasons to believe that weight sharing could improve Potts Model compared to directly fitting Potts Model to individual MSAs. Potts Model is very commonly used and it makes sense to base their method on Potts Model.\\n\\nPersonally, I think this approach is interesting, but the paper needs further experimental validation for acceptance. There are two main limitations in the experiments:\\n\\n 1. The paper claims that there is a gain from amortization for low-depth MSAs. This claim is only partially supported for very shallow MSAs. According to the experiments in the paper, there is only a gain for very shallow MSAs (fewer than 15 sequences) for long-range contact prediction, or for MSAs with fewer than 50 sequences for medium-range contact prediction. I'm not sure whether it's a common situation to have only 15 sequences in an MSA. It would be helpful if the authors could provide example use cases where this applies.\\n\\n 2. The UniRef50 clusters are based on 50% sequence identity. In the paper, the train-test split is based on the clusters. There could still be significant information leakage if some train and test sequences share 40% similarity. I would like to see the authors further examine this and show that this is not the case.\", \"suggestions_for_improving_the_paper\": \"1. Include more than one PFAM clan level study. The P-loop NTPase example is well taken. It seems like the results here are quite different from the UniRef50 results (NPM has advantage for up to MSA depth 100s vs 15). This could be explained by the families in the same clan are a lot more similar than in UniRef50 in general. It would be helpful to understand why the NPM approach worked particularly well on NTPase, and whether it works similarly well on other PFAM clans. It seems like instead of aiming to use the method on arbitrary sequences, the method could be more valuable in cases where we have MSAs for other related families in the same clan.\\n\\n 2. Split train-test differently in UniRef50 to make sure similar families don't end up in both train and test.\\n\\n 3. Instead of only evaluating the top L/5 contact prediction task, try other thresholds. The achieved contact prediction precision is around 10-20%. That seems rather low for both NPM and direct Potts, so hard to say if the gain is meaningful.\\n\\n 4. (Perhaps out of scope for this paper.) Could this method be applied to generate Potts Model parameters to be fed into other structure prediction models (e.g. AlphaFold). For example, it would be interesting to see that the NPM Potts Model can improve AlphaFold contact prediction results. \\n\\n 5. References for important proteins with shallow MSAs.\", \"clarification_questions\": \"1. How do you compute the MSA? Is it from HHblits or some other standard package?\\n2. In the PFAM experiment, how do you vary the MSA Depth? 
Is it random subsampling?\\n\\n--------------------------------------------------------------------------------------------------------------\", \"update\": \"Updated score from 5 to 6. The author response clarified some key questions and the updated paper incorporated some of the feedback.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
qiAxL3Xqx1o | GG-GAN: A Geometric Graph Generative Adversarial Network | [
"Igor Krawczuk",
"Pedro Abranches",
"Andreas Loukas",
"Volkan Cevher"
] | We study the fundamental problem of graph generation. Specifically, we treat graph generation from a geometric perspective by associating each node with a position in space and then connecting the edges based on a similarity function. We then provide new solutions to the key challenges that prevent the widespread application of this classical geometric interpretation: (1) modeling complex relations, (2) modeling isomorphic graphs consistently, and (3) fully exploiting the latent distribution.
Our main contribution is dubbed as the geometric graph (GG) generative adversarial network (GAN), which is a Wasserstein GAN that addresses the above challenges. GG-GAN is permutation equivariant and easily scales to generate graphs of tens of thousands of nodes. GG-GAN also strikes a good trade-off between novelty and modeling the distribution statistics, being competitive or surpassing the state-of-the-art methods that are either slower or that are non-equivariant, or that exploit problem-specific knowledge. | [
"GAN",
"generative adversarial network",
"WGAN",
"GNN",
"graph neural network",
"generative model",
"graph"
] | Reject | https://openreview.net/pdf?id=qiAxL3Xqx1o | https://openreview.net/forum?id=qiAxL3Xqx1o | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"VGD_pDzC_m",
"rA9tt4SS3K",
"lnCIq70Fc8D",
"CZ79gmTaFSm",
"n4TYA6grhT",
"lRgHen5b29I"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040474702,
1604775157327,
1604006237662,
1603976225380,
1603901757536,
1603889501703
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3656/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3656/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3656/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3656/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3656/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"In this paper, the authors proposed a geometric graph generator that applies a WGAN model for efficient geometric interpretation. All the reviewers agree that the idea is interesting and the method has the potentials for graph generation tasks. Unfortunately, the experimental part is unsatisfying, which makes the paper on the borderline. More analytic experiments should be designed to verify the properties of the proposed GG-GAN, especially its scalability. Although in the rebuttal phase the authors add a simple example to generate large but simple graphs, we would like to see more experiments and comparisons on more real-world large graphs (even if the performance may not be good, the results will be constructive for both readers and authors to understand the work).\"}",
"{\"title\": \"The paper is interesting and well-written but lacks good numerical evidence section\", \"review\": \"The work proposes to use WGAN architecture to learn latent space for generating new graphs with similar properties to the original ones. The authors show that their model is capable to control a probability of each new generated graph. Moreover it\\u2019s equivariant function which ensures that isomorphic graphs have the same probability to be generated. These properties are desirable if we want to generate efficiently new graphs with properties similar to the graphs in the training set.\\n\\nThe paper is interesting and well-written, designs important properties for the generator of graphs. Usage of GAN in graph generator has been explored in the previous approaches and showed good results for generating graphs with similar properties to a given one [1]. \\n\\nMy main concern is in numerical evidence section. \\n\\n\\u2022 Datasets. There are essentially 2 real-world datasets with 9 vertices graphs and one artificial dataset with 20 vertices. If a graph has only 9 vertices, there are 12346 non-isomorphic graphs [2]. Your datasets is composed of around 10K graphs, therefore they should have a large portion of isomorphic graphs in train and test. Such repetitions may negatively affect performance of your model as well as baselines [3]. Also, since the graphs are small and consequently the number of non-isomorphic graphs is small, it\\u2019s possible to generate all of them quite easily for graph discovery \\u2013 the main motivation of this paper. It would be more convincing to have a comparison on medium and big graphs. \\n\\n\\u2022 Baselines. Table 1 is not convincing in showing that the proposed method is better than preious approaches. There is a single result (out of 15) where MMD is better than other methods. graphRNN achieves 10 best or second best performances (vs 7 of GG-GAN). Additionally, I would be curious to see other graph generators such as NetGAN [1] (which should scale well for these sizes of graphs) and even simpler baselines such as finding parameters from train set of simple network models (Watts-Strogatz, Barab\\u00e1si\\u2013Albert model, Chung-Lu, etc.) and then generating random graphs from these models. \\n\\n[1] NetGAN: Generating Graphs via Random Walks\\n\\n[2] http://oeis.org/A000088\\n\\n[3] Understanding Isomorphism Bias in Graph Data Sets\", \"https\": \"//arxiv.org/abs/1910.12091\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting problem and a neat idea, but some essential parts are missing.\", \"review\": \"In general, this paper deals with an interesting and essential problem to generate geometric graphs under several standards. The whole algorithm seems easy to implement or reproduce. It seems with minor modifications to traditional autoregressor based generative graph models, the proposed framework can effectively model isomorphism as well as delivers certain novelty. The idea of the paper is with novelty and some theorems can support the observations.\\n\\nHowever, I still have several concerns about the paper, stated as follows:\\n\\n1) While the paper emphasized that the proposed GG-GAN is capable of producing geometric graphs with under several vital criteria (e.g. novelty, scalability and modeling complex dependencies), one critical factor is missing: mode collapse/generation diversity. Many of the generative models still suffer from mode collapse/generation diversity problem, resulting in a small portion of generated variants than empirical observations. I would recommend the authors to discuss and give more evidence showing the ability of the proposed method to avoid such pitfall.\\n\\n2) The authors claimed that the proposed method can model complex local and global dependencies among nodes and edges. I understand that such a procedure can handle node dependencies given the design of the generator part. However, I suspect the ability of the proposed method to model complex \\\"edge dependencies\\\". To me, the sampling procedure of edges is performed under independent Bernoulli distribution for each edge. Therefore, it's inappropriate the claim that GG-GAN incorporates any mechanism to model dependencies between edges.\\n\\n3) In corollary 1, the authors proved the existence of some distribution \\\\mathcal{D}_x. However, it seems that existence of such distribution is naive: we can simply establish distbution with Delta-functions for each node. If I understand right, this corollary is not very informative.\\n\\n4) Proposition is under the condition that each node is sampled independently. However, such a sampling mechanism can be easily replaced with a sampling procedure following a point process to avoid the coincide of nodes. I suppose that it would be better to briefly discuss the sampling procedure under this setting. Otherwise, it would be too weak Proposition 1 is.\\n\\n5) Section 2.2.3 Avoiding Collision is obscure to me. The necessity of such a mechanism is not well understood for me. I suggest the authors to give more details or clarify with some theoretical analysis to justify their claim in this section.\\n\\n6) Though the authors gave some discussion on the hand-crafted features of nodes, I still think the features employed in GG-GAN is ad-hoc. Hand-crafted features can greatly hinder the capacity of deep learning model and thus weaken the contribution of the paper.\\n\\n7) I see that to construct the initial input to the generator, an initial fixed point configuration and a unique sampled z for each node are concatenated. I suggest the authors to show what will happen if we sample z separately for each node. This may help to understand the necessity of unique sampling, then further show the mechanism behind it.\\n\\nI might consider to raise the rating if the authors can address my concerns well.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"GG-GAN: A GEOMETRIC GRAPH GENERATIVE ADVERSARIAL NETWORK\", \"review\": \"The main contribution of this paper is dubbed as the geometric graph (GG) generative adversarial network (GAN), which is a Wasserstein GAN that addresses the challenges.\\nThe proposed method is inspiring and has sufficient theoretical support. This paper is globally well organized and clearly written.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"The paper claimed multiple fascinating properties of GG-GAN, but lacks some supporting analysis and experimental demonstration.\", \"review\": \"The paper proposes GG-GAN, a GAN-based graph generative model for mimicking the structure distribution of realistic networks. The authors claimed multiple fascinating properties of GG-GAN, including (1) isomorphism consistency, (2) expressive power, (3) scalability, (4) novelty. Experimental results on three datasets show the performance of GG-GAN in terms of five network properties. In general, the paper is well written and easy-to-follow. And a bunch of add-on theorems is provided to support the rationality of the proposed method. My major concerns are that the paper might over-claimed their contributions and the experimental results show limited improvement over the baseline methods. Here are the detailed comments:\\n\\n[Novelty]: The studied problem has been (partially) studied in the existing literature (e.g., isomorphism in graphRNN, expressive power in NetGAN, Scalability in TagGen, etc. ). It is unclear to me what the key contributions of this paper on top of existing work. \\n\\n[Literature review] The authors fail to provide a comprehensive literature review in the context of graph generation. The authors might want to provide an individual section to fully discuss the connection between this paper to the previous work. For example, \\n* in terms of isomorphism, what is the difference between GG-GAN and the way employed in graphRNN? Where are the key innovations?\\n* in terms of scalability, the authors claimed that \\\"GG-GAN is significantly faster than autoregressive models\\\". Have you compared with the recent proposed Transformer-based graph generative model (e.g., TagGen)?\\n* ...\\n\\n[Theoretical analysis] The paper includes a bunch of add-on theorems. But, it seems to me that the connection between them and the proposed GG-GAN is weak. \\n\\n[Experiments - network quality] The paper shows limited improved over the baseline methods in terms of the quality of the generated graph. Interestingly, the paper fails to compare with NetGAN, which achieves SoTA performance in a list of network properties in the real-world datasets in my practice. \\n\\n[Experiments - scalability] As scalability is one of the major claims of GG-GAN - \\\"GG-GAN is significantly faster than the SoTA autoregressive models\\\", the authors should give a thorough comparison with the existing RNN-based graph generative model (e.g., NetGAN)/Transformer-based graph generative model (e.g., TagGen). Moreover, the authors may want to provide some insightful discussion on why the proposed GAN-based is definitely faster than the autoregressive model. To my best of knowledge, the autoregressive models can be mostly scaled with O(kn), while the GAN-based graph generators are required to fully specify the adjacency matrix A with an O(n^2) parameter space. Please correct me if I am wrong here. \\n\\n\\nOverall, I enjoyed reading this paper. But, without clearing my concerns stated above, I will vote for weak rejection.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A good paper, some parts may need to be clarified.\", \"review\": \"The paper investigates graph generation using adversarial technics. They introduce an algorithm named GG-GAN, based on Wassertain GAN, in order to accurately generates new graphs in hopefully the same distribution as a given dataset. GG-GAN generates points in an euclidian space that is then turned into a graph using a similarity function on the space. This approach is justified by Theorem 1. The authors show that their method successfully generate graphs within the same scope as the input dataset, and show that GG-GAN generates much more new graphs that current state of the art approach.\", \"pros\": [\"The paper is well written, honnest and didactic.\", \"The proposed method is new and is experimentaly efficient.\"], \"cons\": [\"The experiments could be more convincing: small dimension (<=20), classical graph benchmark.\", \"The introduction of $\\\\phi$ is confusing, and its implications not well justified.\"], \"remarks\": [\"Proposition 1 should be clarified with assumption within the theorem statement. The \\\"probability of being drawn by probability $\\\\mathcal{D}$\\\" is not well defined. This should be made very rigourous.\", \"Section 2.1: 'complete representation' is not defined. Wouldn't k>=n-1 be enough?\", \"Section 3.1: it is not totally clear to the reader why the set $\\\\phi$ is important in GG-GAN. i do not get how it works. How do you learn such parameters? How does it avoid collisions?\", \"Section 3.3: some recent published papers actually give simple tricks for MPNNs in order to achieve universality without going through higher order tensors [1, 2]. It would be interesting to investigate the relationship of these approachs with the concatenation trick in GG-GAN.\", \"I am not fully convinced the ethical consideration and impact paragraph is needed here.\"], \"questions\": [\"Experiments: what are the number of non-isomoprhic classes in the considered datasets?\", \"You finally sample through a Bernouilli hence non directly differentiable. Why not using the same trick with other methods instead of going for GANs?\", \"How does $\\\\phi$ looks like after training? What happens if we do not learn it and use random vectors instead?\", \"When learning several $\\\\phi$ (\\\"a batch\\\") how do you precisely use it for generation?\"], \"typos\": \"- many references are incomplete, e.g. \\\"A variational inequality perspective on generative adversarial networks\\\" is ICLR'17, \\\"Improved training of Wassertstein GANs\\\" is NeurIPS'17, etc\\n- ref 'On random graph', R\\u00e9nyi: typo on the accent\\n\\n[1] Coloring graph neural networks for node disambiguation, IJCAI'20\\n[2] Universal Invariant and Equivariant Graph Neural Networks, NeurIPS'19\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
6zaTwpNSsQ2 | A Block Minifloat Representation for Training Deep Neural Networks | [
"Sean Fox",
"Seyedramin Rasoulinezhad",
"Julian Faraone",
"david boland",
"Philip Leong"
] | Training Deep Neural Networks (DNN) with high efficiency can be difficult to achieve with native floating-point representations and commercially available hardware. Specialized arithmetic with custom acceleration offers perhaps the most promising alternative. Ongoing research is trending towards narrow floating-point representations, called minifloats, that pack more operations for a given silicon area and consume less power. In this paper, we introduce Block Minifloat (BM), a new spectrum of minifloat formats capable of training DNNs end-to-end with only 4-8 bit weight, activation and gradient tensors. While standard floating-point representations have two degrees of freedom, via the exponent and mantissa, BM exposes the exponent bias as an additional field for optimization. Crucially, this enables training with fewer exponent bits, yielding dense integer-like hardware for fused multiply-add (FMA) operations. For ResNet trained on ImageNet, 6-bit BM achieves almost no degradation in floating-point accuracy with FMA units that are $4.1\times(23.9\times)$ smaller and consume $2.3\times(16.1\times)$ less energy than FP8 (FP32). Furthermore, our 8-bit BM format matches floating-point accuracy while delivering a higher computational density and faster expected training times. | [
"deep neural networks",
"representations",
"block minifloat representation",
"operations",
"accuracy",
"dnn",
"high efficiency",
"difficult",
"native"
] | Accept (Poster) | https://openreview.net/pdf?id=6zaTwpNSsQ2 | https://openreview.net/forum?id=6zaTwpNSsQ2 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"HuBduR4ZmmX",
"UsFf72AJDl",
"-5UXZW608wZ",
"KicWDF7omYC",
"pYHwUaesTTd",
"g4Jir8CZZ6a",
"Wgf1zfl6SqZ"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040471371,
1605824399767,
1605823728843,
1605823023298,
1603901476892,
1603867682880,
1603817611995
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3654/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3654/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3654/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3654/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3654/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3654/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes a new approach to training networks with low precision called Block Minifloat. The reviewers found the paper well written and found that the empirical results were sufficient. In particular, they found the hardware implementation was a strong contribution. Furthermore, the rebuttal properly addressed the comments of the reviewer.\"}",
"{\"title\": \"Response to \\\"Official Review 3\\\"\", \"comment\": \"We would like to thank the reviewer for their valuable feedback and comments. We have uploaded a revised draft with more information and improved clarity based on suggestions and feedback received. Below, we provide answers and responses to specific questions and issues raised.\\n\\n#1. \\u201calignment of minifloat distribution and value distribution requires more precise explanation\\u201d. This is indeed an important aspect of our work. In block minifloat, the shared exponent bias is calculated to align the minifloat distribution with the maximim exponent in the value distribution. The idea is to avoid overflow, which contributes the largest errors in dot-product computations. We have added Equation 3 in the revised draft to formalize this key step.\\n\\n#2. \\u201cEquation 6 requires more explanation\\u201d. This issue was also raised by Reviewer 2 and has been addressed in the revised draft, by Equation 7 now. The question remains how Equation 7 can be shown to match an implemented design. This is possible by defining the resource costs for each overhead (given by $\\\\alpha$ in the text) using actual synthesized area results. For example, $\\\\alpha_{1}$ would be equal to the area for BM multiply-add, and $\\\\alpha_{2}$ would be the area of the floating-point hardware (which also includes the BM quantization and conversion modules). Since only $\\\\alpha_{1}$ has been synthesized we can not use Equation 7 in a useful sense. Furthermore, Equation 7 is not designed to provide very accurate resource/area estimates since RTL synthesis tools are typically difficult to model. Rather, Equation 7 is expected to guide the choice of N, by capturing the main trend.\\n\\n#3. \\u201cExperiments and results are confusing\\u201d. This has been updated in the revised draft with appropriate definitions given for columns of the numerical representation (i.e. w, x, dw, dx). Additionally, BM7 has been added to the main results table from the Appendix.\\n\\n#4. \\u201cNotation issue with block minifloat definitions\\u201d. The original notation was indeed wrong. We have redefined $X_{i}^{a}$ to hopefully clear this up.\\n\\n#5. Flexpoint refers to a 16-bit block floating-point format with a 5-bit shared exponent. Block minifloats are considerably smaller, between 4 and 8 bits with an 8 bit shared exponent, and use minifloat rather than fixed-point as the underlying block representation. In terms of how the shared exponent is updated, Flexpoint proposes an algorithm called Autoflex to predict the output exponent, from a history of maximum exponents, and use the prediction to preemptively increase the exponent to avoid overflows. Our exponent update scheme is simpler and is calculated by the maximum exponent at that current point in training.\\n\\n#6. We simulate the behaviour of BM hardware using GPUs and Pytorch. Therefore, the accuracy is taken from ImageNet experiments running on the GPU. Our RTL code is only capable of simulating the BM dot-products, not the entire system.\"}",
"{\"title\": \"Response to \\\"Official Review 2\\\"\", \"comment\": \"We would like to thank the reviewer for their valuable feedback and comments. We have uploaded a revised draft with more information and improved clarity based on suggestions and feedback received. Below, we provide answers and responses to specific questions and issues raised.\\n\\n1. \\u201cminor issue about the specification of the BM configuration\\u201d. All BM formats are hybrid, meaning two formats are specified. BM8 always refers to (2,5)/(4,3), like BM6 always refers to (2,3)/(3,2). We\\u2019ve tried to clear this up in the text in \\u201cSection 3, Training with Block Minifloat, Hybrid representation\\u201d.\\n\\n2. \\u201cmajor issue about three BM formats and associated overhead of the arithmetic unit\\u201d. Firstly, the (6e, 9) format is only applied to the weight gradients which does not require matrix multiplication. This means that the (6e, 9) does not involve computations mapped to block minifloat hardware (which is focused on dot-products in the GEMM). The issue now relates to the overhead for supporting (2e,5) and (4e,3) formats together, which have 2-bits mismatched. This primarily affects the mantissa multiplier, which must be large enough for the two largest mantissa operands. Put another way, the mantissa multiplier is not 5x3, rather it must support 5x5 (i.e. for weight*activation in forward path). Importantly, this has already been factored into our hardware as well as synthesis results. For some more clarity, Table 9 (original draft) or Table 11 (revised draft) provides component breakdowns of the logic area for (2e,5)/(4e,3) format and specify a 6x6 mantissa multiply. 6x6 instead of 5x5 because of the implicit bit which is added for supporting denormal numbers. We hope our explanation has cleared up this issue.\\n\\n3. \\u201cissue about power modelling\\u201d. The power comes from the Cadence RTL Compiler. It's important to note that these numbers are only rough estimates.\\n\\n4. log-BM was included because we wanted to show that block minifloats cover the spectrum of formats between fixed-point and log representations at the corner cases. We compared VGG-16 with (Miyashita et al. 2016) since this is our only known example of logarithmic training. We haven\\u2019t tested BM5 or BM4 on VGG16, though we do expect similar if not better results than log-BM5 and log-BM4. This could be an interesting comparison point if we do in fact achieve better results (all be it on an outdated network). We have not had an opportunity to include this in our revised draft, but will consider the merits of this for any subsequent revision. \\n\\n5. \\u201cissue about confusing Equation 6\\u201d. We agree. Our revised draft provides a much more thorough explanation for each term in what has now become Equation 7.\"}",
"{\"title\": \"Response to \\\"Official Review 1\\\"\", \"comment\": \"We would like to thank the reviewer for their valuable feedback and comments. We have uploaded a revised draft with more information and improved clarity based on suggestions and feedback received. Below, we provide answers and responses to specific questions and issues raised.\\n\\n#1. The main reason for \\u201cup to 4-bit exponents\\u201d is because in these regimes block minifloat representations (with Kulisch accumulation) have INT8-like hardware complexity (i.e. 8-bit multiply and 32-bit accumulate). INT8 is important because it represents the most efficient arithmetic scheme capable of training tasks like ImageNet to an accuracy that is close to FP32. Put another way, the Kulisch accumulator is prohibitively wide and expensive for formats with larger exponents. This becomes clear when comparing kadd for different formats, but more importantly, it is supported by our hardware synthesis results of different formats. We have updated \\u201cSection 2.3 Kulisch Accumulation\\u201d in the revised draft with a similar explanation. Also, we have included Table 11 in the Appendix which provides a better comparison for kadd and related area costs.\\n\\n#2. The Kulisch accumulator is always sized to calculate an error free sum of products. On the otherhand, FP32 accumulators may produce small rounding errors when adding very large and small numbers together. In such cases, the accumulator may not have enough mantissa bits (i.e. 23 in FP32) to tolerate large right shifts. Given that most DNNs train successfully with FP32 accumulation, it is reasonable to assume that FP32 is a very close approximation to an error-free Kulisch accumulator.\\n\\n#3. \\\"issue regarding denormal overheads\\\". In our estimation, the hardware overhead for supporting denormal numbers is small. Only one extra bit (i.e. the implicit bit) must be incorporated into each mantissa for multiplication. This does incur some overhead, though the resultant arithmetic units are typically still very small. We cite our synthesis results to support this claim. Furthermore, there is no additional complexity in the addend or result of the MAC unit because these are represented in fixed point. To determine whether a partial sum (for a given dot product) is norm or denorm, the value must be converted to a floating point representation, using standard fixed to float conversion hardware, where the number will be denorm if the floating-point exponent encoding is 0 (as in Equation 1). \\n\\n#3. Conversion from fixed point to floating-point (and subsequently Block Minifloat) involves the following steps. 1.) A leading-one detection module is required to determine the position of the most significant bit, 2.) the fixed point number must be shifted so the decimal point is at the most significant bit, 3.) the position (which is a bit index and location of decimal point) should be compared to the representable range of the floating point or minifloat format. This involves conditional logic to determine whether the number should be saturated, flushed-to-zero, denorm or norm. 4.) A denorm number exists if the position is equal to the minimum exponent in the minifloat format (after the bias applied). In such cases, the decimal point must be left shifted by one more bit, 5.) The last step involves quantization of the fractional bits (or mantissa) using stochastic rounding. This can be implemented with a multiplier and linear feedback shift register (LFSR). 
We have provided a little bit of context to this description in \\u201cSection 5 Hardware Evaluation\\u201d of the revised draft. This is to supplement the main analysis of block minifloat arithmetic units.\\n\\n4.) This was the plan. We started ResNet50, but the training time proved to be a bottleneck for effective testing and exploration. We instead focused on smaller networks that we feel are more suitable to the targeted embedded domain anyway, where stricter computational power and area constraints exist.\"}",
"{\"title\": \"A good submission but need to provide more information\", \"review\": \"This paper introduced a new representation (Block Minifloat) for training DNNs with low precisions of 8-bit or less. This new representation combines FP8 formats and the shared exponent bias concept to cover the dynamic range of tensors needed for DNN training. Compared to other published FP8 format, this representation has smaller exponents, which allows to use a more efficient Kulisch accumulator. The representation has been verified on a spectrum of deep learning models and datasets.\\n\\nOverall, the paper is well written. The idea is clearly presented, and experiments are sound. Particularly, the hardware evaluation gives some impressive results. However, for the sake of clarity, I have some questions.\\n\\n1). The shared-exponent bias itself is not new and use it for FP8 training is also straightforward and has limited novelty. What interesting is that the author uses this method to push for smaller exponent bits which in turn allows a more efficient accumulator. However, as a key contribution of this work, the authors did not give enough information and details on why \\u201cexponents up to 4 bits offer distinct advantages\\u201d for Kulishch accumulation. Could the authors explain more on the accumulator and provide some evident why smaller exponent is critical?\\n\\n2). From emulation point of view, the authors use existing CUDA libraries for GEMM which basically uses standard FP32 floating point accumulation. When the authors use certain bit-width for Kulish accumulator elements (for e.g. Table 9 and Table 10), how do you know this accumulator setting won\\u2019t impact model convergence? \\n\\n3). Hardware overhead for denorm support: the exponent bias is only guard against overflow, so denorm numbers are used to cover the range of smaller numbers. The authors claim that the hardware overhead is minimum since only input multiplicands are needed for the detection of denorm. However, there should be additional complexity than that. For example, how to handle denorm numbers in addend and how to tell the result produced is norm or denorm? Could the authors describe the hardware that needed to convert numbers from an intermediate format or a floating point format to their BM format?\\n\\n4). For experiment, it would have been good to include some larger networks, such as ResNet50 on ImageNet to compared to SOTA.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple extension of block floating point, interesting contribution on the hardware aspects\", \"review\": \"The authors proposes block-minifloat (BM), a floating-point format for DNN training. BM is a fairly simple extension to block floating-point (BFP), which was proposed in (Drumond 2018 and Yang 2019). In BFP, a block of integer mantissas share a single exponent. In BM, a block of narrow floats share a single exponent bias. The shared exponent bias helps to shorten the exponent field on each individual float element. This is a good contribution, though a bit trivial.\\n\\nWhere the paper make a strong contribution is in the hardware implementation of BM, something which neither Drumond or Yang really got into. The authors propose to use a Kulisch accumulator for minifloat dot products, which basically works by converting the floats to integers with a shift, and accumulating with a very wide register. Kulisch accumulators are normally far too wide to be practical (see Eq. 4), but they've been proposed for posit computation (https://engineering.fb.com/2018/11/08/ai-research/floating-point-math/). This seem like a great idea here since BM can reduce the exponent length to only 2 or 3 bits.\\n\\nThe authors also did a good job evaluating the area and power of the BM hardware circuit. They built a 4x4 systolic array multiplier using BM units in RTL, and synthesized to place and route. The results show that BM6 can be 4x smaller and use 2.3x less power than FP8 while achieving to comparable accuracy. This is a pretty impressive result, and the hardware evaluation methodology is more stringent than most quantization papers at NeurIPS/ICML/ICLR. The only **minor** issue I have here is the area/power numbers are reported for BM8/BM6, but the exact config is not specified. E.g. is BM8 referring to (2e, 5m)?\\n\\nThe accuracy comparison is pretty standard, with CIFAR and ImageNet results using mostly ResNet-18. The authors' simulation framework slows training by 5x, so this is as much as I would expect. One **major** issue is that Tables 1 and 3 shows that for training to succeed, the forward and backwards BM formats must be different. Table 3 has three separate BM formats for each row. Implementing them all in hardware could incur significant overhead, which the paper doesn't discuss. The authors mention that the HFP8 paper does the same - but that paper defends this practice by showing that their two formats (which only differ by 1 e/m bit) can be supported by a single FP unit with minimal overhead. This paper uses (2e,5m), (4e,3m) and (6e,9m) in the same experiment labeled \\\"BM8\\\", which seems both misleading and unjustified. Note that SWALP and S2FP8 (and bfloat16/float16 papers) would use the same format in forwards and backwards pass and avoid this overhead.\", \"a_few_other_insights\": \"(1) subnormal floats are important and can't just be flushed to zero; (2) a square BM block size of 48x48 seems to work fine.\", \"minor_issues\": [\"The methodology for hardware area seem solid (Appendix 4), but there isn't much detail on power. Was power obtained through modeling or using an RTL simulator? What kind of test vectors were used?\", \"The area/power numbers are given for \\\"BM8\\\", but what's the precise format? I assumed it was (2, 5).\", \"The introduction of log-BM seems very sudden, and they're only used for VGG-16? Did regular BM5 not work? I'm not sure what to take away from the comparison in Table 2.\", \"Equation 6 was a bit confusing for me. 
It would be helpful to explain briefly how each term was derived.\", \"Training in BM requires you to update the exponent biases in each step (?), which requires computing the dynamic range of each $N \\\\times N$ block. I believe this is probably negligible, but it should be discussed as an additional overhead.\"], \"edit\": \"the authors have clarified that the hardware area results take into account the need to support multiple formats, which addressed my biggest issue with the paper. I have raised my score to a 7 (accept).\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"New numerical representations for training DNNs with very few bits, with impressive results. Some details need more explanation.\", \"review\": \"This paper proposes a family of numerical representations for training neural networks based on minifloats that share a common exponent across blocks. The authors perform a lot of software simulations to explore the design space on many different models and tasks. Hardware designs have also been synthesized and reported.\", \"pros\": [\"The proposed representation is very general and covers a lot of existing designs, while also allowing for new ones.\", \"An exhaustive exploration of the design space, and the discovery of some new low-precision representations that offer high accuracy as well as computational density.\", \"A lot of different models and datasets are examined.\", \"Cons\", \"Some of the main contributions require a more careful, detailed explanation in order to fully appreciate and assess the contribution.\", \"Results are presented in a confusing way.\"], \"main_areas_for_improvement\": \"One of the important contributions of the paper seems to be how the minifloat distribution is aligned with the value distribution and how $\\\\beta$ is calculated to avoid underflow. However, this topic is only given a couple of sentences of explanation together with Figure 2, which does not provide much more information. I think the paper would be improved with a precise mathematical explanation of this re-alignment.\\n\\nThe block size for sharing exponents is determined semi-analytically by using Equation 6 to find a balance between the area cost and the dynamic range of the resulting numerical representation. However, this equation is introduced rather abruptly. The paper would be improved by providing more explanation of how Equation 6 was derived and some intuition into what the different terms mean. Furthermore, given the authors have implemented the block minifloat scheme in hardware, is it possible to show that Equation 6 actually matches what is seen when synthesizing the design? \\n \\nThe tables in Section 4, and the corresponding text, need some work. In Table 2, the footnote (2) is defined but seems to be unused. In Table 2, the different columns (w, x, dw, dx, acc) are not defined anywhere and it is also not clear what the footnote (2) means in this context since it applies to the \\u2018acc\\u2019 column across all schemes. Additionally, in the corresponding text the authors discuss the performance of BM7, but this scheme is not found in the table.\", \"additional_comments_and_questions\": [\"In equation (1) the sign bit $s$ is defined before the equation, but exponent $E$ and mantissa $F$ are not defined until the sentence afterwards.\", \"In equation (2) it is not immediately clear what the index $i$ is meant to indicate. Furthermore, $X_i^a$ seems to indicate that the definition of $a$ depends on itself.\", \"Aside from using minifloats, how does this work compare to Flexpoint [1], which takes a similar approach to shared exponents?\", \"In Figure 6 \\u2013 are the accuracy measurements actually taken from the hardware simulation?\", \"[1] K\\u00f6ster, Urs, et al. \\\"Flexpoint: An adaptive numerical format for efficient training of deep neural networks.\\\" Advances in neural information processing systems. 2017.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
RayUtcIlGz | Training Invertible Linear Layers through Rank-One Perturbations | [
"Andreas Krämer",
"Jonas Köhler",
"Frank Noe"
] | Many types of neural network layers rely on matrix properties such as invertibility or orthogonality.
Retaining such properties during optimization with gradient-based stochastic optimizers is a challenging task, which is usually addressed by either reparameterization of the affected parameters or by directly optimizing on the manifold.
This work presents a novel approach for training invertible linear layers. In lieu of directly optimizing
the network parameters, we train rank-one perturbations and add them to the actual weight matrices infrequently. This P$^{4}$Inv update allows keeping track of inverses and determinants without ever explicitly computing them. We show how such invertible blocks improve the mixing and thus the mode separation of the resulting normalizing flows. Furthermore, we outline how the P$^4$ concept can be utilized to retain properties other than invertibility. | [
"Parameter Perturbation",
"Reparameterization",
"Invertible Neural Networks",
"Normalizing Flows",
"Rank-one update"
] | Reject | https://openreview.net/pdf?id=RayUtcIlGz | https://openreview.net/forum?id=RayUtcIlGz | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"K_5-6pVInf5",
"ULouokHp41c",
"pt3LtiAg0S",
"H4RSrT83DlI",
"-XV8cA7l4_f",
"DfasPJRlmAl",
"FHl-eCo0ICl",
"6se4Fjemu-w",
"vVG46522n9c",
"FOwNrGzxJ4z",
"Ypay74WVDiD",
"-1RKMsN6j77"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040404515,
1606301952766,
1606243218185,
1606228422693,
1606228141981,
1606228060914,
1606228023271,
1606227953694,
1603921147505,
1603803406305,
1603282187603,
1602588754074
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3653/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3653/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3653/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3653/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3653/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3653/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper discusses a method to update/optimise invertible matrices via low-rank updates. The key property of the proposed method is that it keeps track of the matrix inverse and its determinant through the optimisation (with updates that are much cheaper to compute than a direct inversion/determinant computation).\\n\\nWhile the method of performing low-rank updates for invertible matrices itself has already been extensively studied in the literature as pointed out by reviews, this work focuses (after extensive revision) on the properties of this update method.\\n Since the updates may leave the manifold of invertible matrices, a numerical stabilisation step was introduce whereby updates that produce ill-conditioned matrices are rejected during optimisation.\\n\\nRank-one updates allow for fast update of matrix inverse and determinants. So this is particularly interesting when applied to normalising flows, as it allows for cheaper computation of the log-det-Jacobian terms. \\n\\nThe novelty of this approach is rather limited (as also pointed out by R2). The experiments and, in particular, the application to normalising flows are interesting, well-executed. It is not clear if there are advantages of the method in other domains where log-det-Jacobians are not necessary relative to existing literature.\"}",
"{\"title\": \"Invalid merges are rejected!\", \"comment\": \"Merges are rejected whenever the rank-one update is ill-conditioned (Section 3.3). It is shown throughout the paper that this stabilizes the update so that the actual (frozen) weight matrix remains on the manifold. This is proven in the experiments. In fact, this stabilization step is one of the central aspects of the paper and one of the main reasons to choose a stepwise reparameterization approach. The ability to leave (and reenter) the set of feasible parameters between merging steps is in contrast to existing methods, particularly dynamic trivializations.\"}",
"{\"title\": \"Then they are not property preserving?\", \"comment\": \"According to the whole paper, the authors argue that they study property preserving transformations, and they choose $\\\\operatorname{GL}(n)$ to be their set, but now you have confirmed that the image of the low-rank transformation is not even in $\\\\operatorname{GL}(n)$, so that it is not \\\"property preserving\\\" not even in this __very__ weak sense. As such, I think that these statements in the paper that eq. (5) is $P^4$, that is, image in $\\\\operatorname{GL}(n)$ is wrong. This being the cornerstone of the paper, I think that it disqualifies the paper for being accepted.\\n\\nI prompt the authors to rethink the properties of their optimisation method, and try to match it with previous literature in the field, as the idea of parametrising sets of matrices to perform optimisation (what they call $P^4$ framework) is by no means new. Even more, I believe that they should rethink what is the image of the map that they use since, as it stands, it is not even contained in the set that they want to perform optimisation on.\"}",
"{\"title\": \"Marginally above the acceptance threshold.\", \"comment\": \"I thank the authors for their response, as promised I have raised my score to 6 thanks to the modifications made to the manuscript.\"}",
"{\"title\": \"Major Revision\", \"comment\": \"We thank the reviewer for those insightful comments. We are especially grateful for having been pointed to the dynamic trivializations introduced by Lezecano-Casado, which we had missed in our literature review. Since we agree that the P4 update scheme is closely related to dynamic trivializations, we have thoroughly revised the manuscript to take into account previous work by Lezcano-Casado. This is now mentioned in the introduction and discussed in more depth in Section 2.4. Note that due to the comments by all reviewers, the focus of the paper has shifted towards optimization of invertible matrices rather than the general P4 algorithm, which also underlines its novelty with respect to dynamic trivializations. Please find the responses to the detailed comments below.\\n\\n1. This sentence was removed since it no longer fit in with the shifted focus of the paper. What we meant to say there was that rank-one or Householder perturbations only require a few matrix vector multiplications in the forward pass, which is computationally cheap. In contrast, exponential maps often involve matrix exponentials, which are comparably more expensive.\\n2. Thanks for pointing this out. We have generally augmented the comparison with this previous work by Gresele et al. In this context, this sentence was revised and now reads \\u201cIn contrast to this related work, our method facilitates a cheap inverse pass and allows sign changes in the determinant.\\u201c (See Section 2.2.)\\n3. We reset the metaparameters of the optimizer corresponding to the P4Inv parameters. This is now mentioned in Appendices G and H.\\n4. Facilitating cheap inversion and determinant computation is exactly the point of the method. This is particularly interesting for normalizing flows that are trained via a combination of the inverse and forward KL divergence (density estimation + sampling) as in our Section 4.4. This aspect is now highlighted in the introduction and amplified by the new Section 4.1 on computational cost.\\n5. The reviewer is right that rank-one perturbations are not sensible if the goal is just to retain invertibility. The main reasoning behind this choice is that it allows cheap inversion. This is the point, where our approach deviates from dynamic trivializations. While dynamic trivializations are primarily meant to stay on the manifold, our main concern is how to retain knowledge of the inverse and determinant. This point is now highlighted in the introduction and mentioned again throughout the manuscript.\", \"other_decompositions\": \"Due to this comment, we have compared the computational cost of P4Inv layers to LU factorizations in Section 4.1. Parameterizations based on triangular matrices are fairly inefficient on parallel computers, while the P4Inv layers are inverted at the cost of 3 matrix vector multiplications.\\n\\nWe have also revised our references to include the suggestions of the reviewer.\\n\\nWe hope that these revisions mitigate the concerns and that the reviewer deems the updated version of the manuscript acceptable for the conference.\"}",
"{\"title\": \"Major Revision\", \"comment\": \"We thank the reviewer for the positive evaluation and the valuable suggestion. Note that the manuscript underwent a major revision to incorporate comments by the other reviewers.\\n\\nRegarding the reduced search space dimension, we have included the suggested experiment as Appendix I, which is briefly discussed in the main text in Section 4.2. Please note that we have to revise our initial answer to the comment. At first, we did not recognize that scaling the standard deviation of the reinitialization essentially amounts to a change in learning rate. This aspect is now mentioned in Section 3.2.\"}",
"{\"title\": \"Major Revision\", \"comment\": \"We thank the reviewer for those valuable suggestions. The manuscript underwent a major revision to address all comments.\\n\\n1. As suggested, we shifted the focus of the paper so that the whole paper, including title, abstract, and introduction are now centered around invertible layers. Potential applications to other properties are briefly discussed before the conclusions.\\n2. The main advantage with respect to the paper by Gresele et al. is the cheap inversion during training. This aspect is now highlighted in the introduction and in Section 2.2. We also mention a potential drawback of our method induced by the lower-dimensional search space to provide a fair comparison.\\n3. A new subsection (4.1) has been added to the experiments discussing solely the computational cost. We felt that a comparison with a standard layer would not be fair, since the main advantage of the method is the cheap evaluation of determinants and inverses, which is not a part of standard training. As for this (unfair) comparison, the standard training was approximately a factor of 8 faster in all linear test problems. Instead we compared the computational cost of P4Inv layers with other methods to parameterize inverses and determinants.\\n4. The subsection on computational cost also includes LU factorizations.\\n\\nThe minor comments were also valid and helpful. We hope that the reviewer might reconsider his or her score and rate the revised version of the manuscript acceptable.\"}",
"{\"title\": \"Major Revision\", \"comment\": \"We thank the reviewer for those helpful comments.\\nDue to comments by all reviewers, we have decided to explicitly put the focus of the manuscript on invertible layers rather than the general update scheme. This is also reflected in our revised title and abstract and generally helps the organization of the methods section. We have also added further experiments: (a) a runtime comparison between P4Inv layers, standard linear layers, and LU decompositions in Section 4.1 and (b) training of an MNIST classifier through rank-one updates in Appendix I. We are confident that these experiments further underline the capabilities and limitations of our approach. \\n\\nRegarding the reviewer's specific questions:\\n* Dimension of rank-one updates: We agree that each rank-one update has an intrinsic dimensionality of 2n - 1, while it is represented by 2n parameters in our reparameterization. We tested a version of the algorithm, where ||v|| was constrained to unity. This did not prevent premature convergence and also altered the optimal learning rate. When reinitializing v from Gaussian Noise with sigma=1 (as we did in our experiments), gradient updates to u are of the same magnitude as direct gradient updates to A so that the same learning rate can be used for standard training and P4Inv training. This is reflected by the fact that P4 training converges with the exact same rate as standard training on linear test problems (see Figure 3).\\n* Local minima: We have added the results of density estimation of 2D distributions through via the inverses of P4Inv layers to Appendix G. The networks are still capable of approximating the optimal solution, although the match is inferior to forward P4Inv layers.\\n* Figure 4: This Figure shows that sign changes in the determinant are possible in P$^4$Inv training without permanently hurting the inverse. As noted by Papamakarios et al. (arXiv:1912.02762), a continuous update rule cannot parameterize all of GL(n), since determinants can only change sign by passing through zero, where the matrix is singular. Figure 4 shows that passing through zero is possible in discrete steps.\\n* Thanks also for catching some minor inaccuracies and typos.\\n\\nWe hope that the paper is deemed acceptable with those changes.\"}",
"{\"title\": \"A method to optimize invertible matrices through rank one perturbations\", \"review\": \"This paper introduces an algorithm for training neural networks in a way that parameters preserve a given property. The optimization is based on using a transformation R that perturbs parameters in a way that the desired property is preserved. Instead of directly optimizing the parameters of the network, the optimization is carried out on the parameters B of the auxiliary transformation R.\\n\\nThe method is (only) exemplified with the particular case where one needs to optimize a network with the property of having invertible layers (which is an important use case for example for normalizing flows, and invertible networks). In this particular case, the paper shows that the transformation of the parameters can be cast as a perturbation of rank one matrices using closed-form formulas that can be used to check and guarantee the invertibility. The parameters are updated periodically, after a series of perturbations which helps to stabilize the optimization. The paper shows three experiments on two synthetic datasets (linear data and 2d manifold) and one on Boltzmann generators to generate samples of Alanine Dipeptide.\\n\\nThe paper presents an interesting idea and some empirical evidence showing promising results. My main concern with the paper is that the experimental evidence is quite limited so it is hard to judge the real contribution of the method. Additionally, the paper could improve the organization.\\n\\nIn particular, Section 2 needs a better organization. The title mentions a general idea, but in practice only the case of invertible matrices is analyzed and discussed. Section 2.2. is rather disconnected from Section 1.1. There's no motivation why this is there. (Maybe this should be moved to a subpart of Section 2.3). Same applies to Section 2.4 (implementation details?). It might be better to specifically focus and present the method for the the case mentioned in Section 2.5. Also, it seems more reasonable to discuss the related work (Section 3) before jumping into the algorithm details (part of Section 2).\", \"some_questions\": \"-- Update u and v. The optimization in u and v has too many degrees of freedom. For example, you could constrain to always have ||v||=1 without losing anything. Will this help to avoid local minima?\\n\\n-- In the first experiment (linear data) it is mentioned that when training against linear training data P^4Inv can get stuck into local minima, and that this does not happen when using more complex data. Would you mind elaborating a little more on this observation?\\n\\n-- What can be concluded from Figure 4? It seems to me that all the tested methods did a good job here.\", \"typos_and_minor_comments\": \"--Pag2 - B_0 = ids. I think this should be R_{B_0} = ids\\n\\n--Pag6 - B=-I_{101}, I assume this should be A=-I_{101}.\\n\\n--Pag7 - \\\"since it with a given\\\" (writing)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Parameter perturbations for building invertible matrices applied to density estimation\", \"review\": \"# Summary\\nThis work proposes parameter perturbations for the training of neural networks while preserving some properties. It particularly focuses on the case of invertible layers and demonstrates improvements over classical architectures for density estimation with normalizing flows. It is empirically shown that parameter perturbations allow for sign changes in the determinant.\\n\\n# Major comments\\n## Pros:\\nThis work is original and tackles an important problem, that is learning invertible matrices. I think the approach is original and the fact that it could be applied for other properties makes the development and the proof of work very interesting. \\nAt some point you discuss some weaknesses of your work, I really liked that as it will definitely help people using your method. The paper is clearly written and so easy to follow. The experiments are insightful and help the reader to get a better feeling of what is happening at training time. Some results suggest that using your method clearly helps for certain density estimation tasks. Additional experiments could maybe show that P4 + Bent leads to efficient and expressive normalizing flows.\\n\\n## Cons:\\n1) The focus of the paper is not really on parameter perturbations for training neural networks in general. Most of the paper discusses how perturbations can be applied for learning invertible matrices. Introducing the paper as if it could be applied successfully for any kind of properties does not make a lot of sense to me as only experiments on invertible matrices is presented.\\n2) The paper discusses some concurrent methods for learning invertible matrices such as https://arxiv.org/pdf/2006.15090.pdf. However, it is not clear which method is preferable in which situation. What is even more embarrassing is that no comparison at all is performed against this work.\\n3) A serious discussion about the computational cost of using perturbation instead of another method for learning an invertible matrix is lacking.\\n4) The benchmark you are using to assess the performance gain on the density estimation task is a bit odd to me and does not allow the reader to really assess the usefulness of your method in such a setting. \\n\\n## What could be done to address my comments\\n1) Either perform additional experiments showing the applicability of your method for a large bunch of interesting properties (e.g. orthogonality). Instead, I would suggest clarifying the title, the abstract, and the intro. Adding a discussion about other interesting properties that could be kept with your method between the experiments and the conclusion would make sense to me. As is I feel like you're selling something more generic than what you really show with your experiments whereas I believe learning invertible matrices is already an important and interesting achievement.\\n2) Add a comparison with the referenced paper. And discuss the potential advantage of each method.\\n3) Discuss the computational cost. As an example how long is the training of P4 inv in comparison to linear in fig 3.\\n4) Try to discuss the more general advantage of using learned invertible linear layers in flows instead of predefined PLU factorization. 
I don't think benchmarking your \\\"flow\\\" with respect to the UCI datasets will really improve the quality of your manuscript but it would help the reader to get a sense of some possible additional expressivity gained with your method.\\n\\nIf you address comments 1, 2, and 3 (or convince these are not important), I will raise my score. Addressing 4 is less important but it would definitely improve the manuscript's quality.\\n\\n# Minor comments\", \"pages_2\": \"B_0=id_S -> R_{B_0} = id_S ?\", \"page_3\": \"Could clarify/elaborate on the last sentence just before algorithm 1.\", \"page_5\": \"I would avoid classifying NFs into two categories as you do.\", \"page_6\": \"Beginning, \\\"quadratic matrix\\\" -> \\\"square matrix\\\" ?!\\n4.3: \\\"since it with\\\" ?\\nWhy not just using elu, sigmoid, or than instead of Bent identities? Could you explain why if it is important?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review: Nice idea and very clear writing, uncertain about concern raised by reviewer 2.\", \"review\": \"### Update:\\nI'd like to thank the authors for carefully addressing my concerns.. \\n\\nReviewer 2 claims $P^4$ training is a special case of previous work. It seems the authors agree at least to some extent *\\\"we agree that the P4 update scheme is closely related to dynamic trivializations\\\"*. It is not clear to me how harshly this should be penalized. In some cases, it is interesting to research special cases of known general results. Unfortunately, it is hard for me to judge if this is the case here, as I know of the previous articles reviewer 2 refer to, but I have not studied them carefully. As a result, I am decreasing my confidence from 3 to 1 and my score from 7 to 6. \\n\\nI hope this does not discourage the authors, and wish them good luck with future research. \\n\\n___\\n\\n### Summary \\n**Objective:** train neural networks while retaining properties like invertibility or orthogonality of layers. \\n\\n**Approach:** instead of optimizing normal weights optimize an rank 1 update. Occasionally, the rank 1 updates are merged with the network parameters. \\n\\n### Strengths\\n**[+]** Preserving properties like invertibility is important and has been studied by much previous work. The authors present a novel approach based on rank 1 updates, which, to the best of my knowledge, is completely novel. \\n\\n**[+]** The article is very clearly written, it seems to me that the authors spent a great deal of time polishing the paper. \\n\\n### Weaknesses\\n**[-]** I have a minor concern wrt. optimization of rank 1 updates which I elaborate below. \\n\\n### Recommendation: 6\\n**[+]** The paper presents a novel approach for preserving invertibility, which could benefit many deep learning researchers. \\n\\n**[+]** The paper demonstrates how rank 1 updates can be used in deep learning, which I believe will inspire further research into this interesting direction. \\n\\nI condition my recommendation on an MLP experiment I already discussed with the authors before submitting this review. Furthermore, I'll re-evaluate my conviction after reading the comments by the other reviewers. \\n\\n### Questions \\nThe following question was already answered by the authors before the submission of this review. I repeat the question here for completeness. \\n\\n---\\n\\nRecent research explore the idea that SGD variants perform better with networks that has more parameters. Informally, it is argued it is harder to get stuck at local minima when SGD can move in more directions. If one optimize rank 1 updates, SGD would have $2d$ directions to move in instead of $d^2$ directions. I am concerned this might impacts the performance of SGD. This concern would be address by the following experiment: train two MLPs on MNIST, one with SGD and one with $P^4$, do they attain similar loss curves?\\n\\n---\\n\\n### Additional Feedback\\nI didn't find any typos, the article seems to be very polished.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"The general framework is not new\", \"review\": \"***Summary and general comments:***\\n\\nThis paper presents a method to parametrize a set of constraints via a linear space that may change. This method was already studied in much more depth in [1], where this idea is explored through the lens of vector bundles and retractions on them, with a convergence result appearing in the follow-up work [2].\\n\\nThe authors then instantiate these ideas in the setting of normalizing flows via the Sherman\\u2013Morrison inversion formula.\\n\\n***Questions:***\\n\\n1) For instance, parameterizing the whole orthogonal group (via the Cayley parameterization or matrix exponentials of skew-symmetric matrices) is computationally expensive. In contrast, orthogonal matrices can be cheaply perturbed into other orthogonal matrices using double Householder transforms or Givens rotations.\\n\\nWhy is that the case? Could you please justify the first assertion?\\n\\nWhat do you mean by \\\"it can be cheaply perturbed into an orthogonal matrix\\\"? The best orthogonal approximation to a given matrix is the orthogonal projection given by the polar decomposition. That is expensive.\\n\\n2) \\\"In contrast to this related work our method reparameterizes the update step during gradient descent rather than the parameter matrix itself\\\"\\n\\nThis is not really different as explained in [1]. Using a dynamic trivialization changes the metric that you are working with, but you can still use it as a parametrization. In your example, it would account for changing the problem\\n$$\\\\min_{x \\\\in \\\\operatorname{GL}(n)} f(x)$$\\nto\\n$$\\\\min_{u,v \\\\in \\\\mathbb{R}^{2n}} f(X_0 + uv^\\\\intercal)$$\\nfor a fixed $X_0$ and changing the problem via the dynamic trivialization framework every $N$ steps. Using this formulation, you can let the optimizer do all the heavy lifting for you. See also 5)\\n\\n3) Trivializations are known to have a problem with the momentum and adaptive term. I see that you use Adam in the training of the 2D distributions. How do you solve the problem of the incorrect momentum and adaptive terms when you change the base?\\n\\n4) A general question. I do not understand what is the motivation behind using the low-rank update besides the fact that it allows for an (amortized) low cost of the inversion.\\n\\n5) A follow-up of the previous question: How is $R_{X}(u,v) = X + uv^\\\\intercal$ a sensible map to use? This map does not have its image on $\\\\operatorname{GL}(n)$ but on all $\\\\mathbb{R}^{n\\\\times n}$! For example, $R_{-uv^\\\\intercal}(u,v) = 0$ which is clearly not invertible. As such, I do not understand the claims in section 2.3 about this being a \\\"fully flexible invertible layer\\\". This is the reason behind having to use Algorithm 2 in P\\u2074Inv. Given that Algorithm 2 is used, what is the reason behind using this parametrization at all over just doing unconstrained optimization?\\n\\n\\n***Citations:***\\n- Lezcano-Casado & Mart\\u00ednez-Rubio do not use Givens rotations but the exponential. 
The first to use Givens rotations in the context of ML was [3].\\n\\n- When it comes to Riemannian gradient descent, it is probably better to cite Absil's book [4] as a general reference rather than two recent papers, as this is a well studied topic.\\n\\n- I would recommend to clean-up the bibliography, as there are many citations that point to the arXiv when the articles have indeed been published in peer-reviewed venues.\", \"minor\": \"- \\\"which is a R-diffeomorphism\\\" -> \\\"an\\\"\\n\\n***Conclusion:***\\n\\nI really like the first experiment for its simplicity, trying to elucidate the behavior of the layer. It is also nice to see the improvement that this idea gives over RNVP. At the same time, when it comes to the experiments, I believe that it would have been of interest to compare this approach with other known ways to parametrize invertible linear layers, such as those that use QR, SVD or Choleski factorizations.\\n\\nThat being said, as mentioned in 4) and 5) I do not see the reason for this being a good way to obtain invertible layers, given that even the image of the parametrization does not lie in GL(n). Furthermore, the ideas behind the framework presented in this paper were already studied in previous papers in much more depth.\\n\\n\\n[1] M. Lezcano-Casado. \\u201cTrivializations for gradient-based optimization on manifolds\\u201d. NeurIPS, 2019\\n\\n[2] M. Lezcano-Casado. \\u201cCurvature-Dependant Global Convergence Rates for Optimization on Manifolds of Bounded Geometry\\u201d. https://arxiv.org/abs/2008.02517\\n\\n[3] U. Shalit, G. Chechik. \\u201cEfficient coordinate-descent for orthogonal matrices through Givens rotations\\u201d. ICML, 2014\\n\\n[4] P.-A. Absil, R. Mahony, and R. Sepulchre.Optimization algorithms on matrix manifolds. PrincetonUniversity Press, 2009- .\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
9p2ekP904Rs | Representation Learning via Invariant Causal Mechanisms | [
"Jovana Mitrovic",
"Brian McWilliams",
"Jacob C Walker",
"Lars Holger Buesing",
"Charles Blundell"
] | Self-supervised learning has emerged as a strategy to reduce the reliance on costly supervised signal by pretraining representations only using unlabeled data. These methods combine heuristic proxy classification tasks with data augmentations and have achieved significant success, but our theoretical understanding of this success remains limited. In this paper we analyze self-supervised representation learning using a causal framework. We show how data augmentations can be more effectively utilized through explicit invariance constraints on the proxy classifiers employed during pretraining. Based on this, we propose a novel self-supervised objective, Representation Learning via Invariant Causal Mechanisms (ReLIC), that enforces invariant prediction of proxy targets across augmentations through an invariance regularizer which yields improved generalization guarantees. Further, using causality we generalize contrastive learning, a particular kind of self-supervised method, and provide an alternative theoretical explanation for the success of these methods. Empirically, ReLIC significantly outperforms competing methods in terms of robustness and out-of-distribution generalization on ImageNet, while also significantly outperforming these methods on Atari achieving above human-level performance on 51 out of 57 games. | [
"Representation Learning",
"Self-supervised Learning",
"Contrastive Methods",
"Causality"
] | Accept (Poster) | https://openreview.net/pdf?id=9p2ekP904Rs | https://openreview.net/forum?id=9p2ekP904Rs | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"uVdykxpUZD",
"xUtXnI38eBU",
"vU8DQyo-6b",
"8bLct84tYor",
"FzGtFlxnIMT",
"Ory7SMthdnw",
"contffMfion",
"25sFpI8cQW6",
"ZavgwHCK5yU"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040357770,
1606251568185,
1605703342557,
1605703081856,
1605702933314,
1604656891653,
1604281469075,
1603958264132,
1603815086399
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3652/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3652/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3652/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3652/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3652/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3652/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3652/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3652/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The manuscript proposes a causal interpretation of the self-supervised representation learning problem. The data is modeled as being generated from two independent latent factors: style and content, where content captures all information necessary for downstream tasks, and style captures everything that is affected by data augmentations (e.g. rotation, grayscaling, translation, cropping). The main contribution is a specific regularizer for self-supervised contrastive learning, motivated by the assumptions about the data generation.\\n\\nReviewers agreed that the manuscript is oversold on the causal jargon, as was noted, the manuscript does not perform any causal inference. Nevertheless, they think that there is an interesting interpretation of self-supervised learning and the results are noteworthy.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their comments and suggestions.\\n\\n*Regarding Theorem 1* -- We appreciate that the reviewer has taken a detailed look at our theoretical contribution. The proof as given in Appendix D.2 on page 14 is correct. The first equality *is correct* as $Y^R$ is a refinement of $Y_{t}$ and as such $p(Y_{t}\\\\vert Y^{R})=p(Y_{t}\\\\vert Y^{R}, f(X))$; in fact this independence is our motivation for considering refinements in the first place. This equality is a direct consequence of the construction of refinements and can be seen from Lemma 2. To see this note that discrete random variables (e.g. $Y^{R}$ and $Y_{t}$) induce equivalence relationships on the sample space, i.e. their events partition the sample space into equivalence classes. Since $Y^{R}$ is a refinement of $Y_{t}$, we know that the equivalence relation induced by $Y^{R}$ is finer than the equivalence relation induced by $Y_{t}$. By Lemma 2, we then know that every equivalence class induced under $Y_{t}$ can be constructed from a union of equivalence classes induced under $Y^{R}$. Thus, we have that $Y_{t}$ is a function of $Y^{R}$, i.e. $Y_{t} = g(Y^{R})$ for some $g$ and thus we have that $p(Y_{t}\\\\vert Y^{R})=p(Y_{t}\\\\vert Y^{R}, f(X))$. We will add this clarification in the proof of Theorem 1. We understand that the arrow from $Y_{t}$ to $Y^{R}$ in the causal graph in Figure 1 is potentially confusing as the relationship between $Y_{t}$ and $Y^{R}$ is not causal in nature. We will correct this.\\n\\n*Regarding the causal language of the paper* -- The language of causality is a very useful tool even outside of the domains of causal discovery or causal inference as we show in this paper. First, we use the language of causality to succinctly formalize assumptions about the data generation process. Using the resulting causal graph, by the principle of independence of cause and mechanism (a causal concept) we define the notion of an invariant representation (c.f. Equation 1). This allows us to formulate the invariant prediction criterion for learning (i.e. that $p(Y_{t}\\\\vert C)$ is invariant to distribution shifts of $p(S)$ c.f. Equation 2) which is the cornerstone of our proposed method and motivates our objective. While we could have arrived at our proposed objective through some heuristic (as the reviewer recommends), we believe that having a solid theoretical grounding for our method and objective using causality is a much better approach as it not only provides theoretical guarantees (c.f. Theorem 1), but also enhances our understanding of the problem and opens the door for further improvements.\", \"further_comments\": [\"Thank you for pointing out typos; we will fix them.\", \"The do notation is used to signify the intervention performed on the style variable and its use does not depend on whether the intervened upon node has any parents.\", \"As we discuss in the paragraph on lines 139-147, for the instance discrimination task, pairs of points (x_{i}, x_{j}) are needed and we note that the augmentations that need to be applied to the data also follow the same structure, i.e. they are given as a pair of augmentations (a_{l}, a_{k}). We will improve the notation in Equation 2 to a general set of augmentations so that product sets of augmentations are covered under this notation.\", \"As we comment on in the paper (c.f. Page 4), the choice of data augmentations implicitly defines which aspects of the data are designated as style and content. 
Note that image classification, object detection and segmentation are highly related tasks (evidenced by good transfer performance between them and the use of common architectures for all these tasks). As such the information needed for object detection and segmentation is to a large degree overlapping with the information needed for image classification. In addition to that, for these tasks, we can also add more task-specific augmentations, see for example Purushwalkam and Gupta \\u201cDemystifying Contrastive Self-Supervised Learning: Invariances, Augmentations and Dataset Biases\\u201d. We will add more discussion on this in our paper.\", \"Refinements have been proposed in the area of causal feature learning and for this reason we call refinements a causal concept. Please consult Chalupka et. al. \\u201cCausal feature learning: an overview\\u201d for more details on refinements.\", \"Regarding comparison to cross-entropy baseline -- Note that we compare against this baseline in our experiments as this baseline corresponds to the SimCLR method.\", \"Visualization of Theorem 1 - We visualize the consequences of Theorem 1 in Figure 2.\"]}",
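As a toy illustration of the refinement argument above (our construction, not the paper's): instance identities induce the finest partition of the data, so any coarser task label is a deterministic function g of the instance label, and conditioning on anything further adds no information about Y_t given Y^R.

```python
# Toy refinement: instance ids (the finest partition, Y^R) determine any
# coarser task label Y_t through a deterministic map g, so
# p(Y_t | Y^R) = p(Y_t | Y^R, anything else).
instance_of = {"img0": 0, "img1": 1, "img2": 2, "img3": 3}   # Y^R
g = {0: "cat", 1: "cat", 2: "dog", 3: "dog"}                 # Y_t = g(Y^R)

def downstream_label(img):
    return g[instance_of[img]]

assert downstream_label("img1") == "cat"
```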
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their positive evaluation of the paper and their comments and suggestions for improvement.\\n\\nThe assumption of two independent variables (content and style) generating the data is common in the literature, e.g. [1, 2]. This assumption is also often implicitly made in a lot of the computer vision literature that relies on data augmentations for training. Note that data augmentations are required to achieve state-of-the-art performance for both supervised and unsupervised recognition tasks. We make use of this assumption because it has already been proposed in other settings and also provides a useful model for the success of data augmentations in a very wide range of scenarios.\\n\\nFor data augmentations, we use the data augmentations proposed in [3] and now widely adopted across self-supervised representation learning methods, e.g. [4]. As mentioned in our paper (on page 4) choosing a set of augmentations implicitly defines which aspects of the data are considered style and content in relation to the downstream task. So with some high-level knowledge of the downstream task (e.g. is it image classification) one can create data augmentations by combining simple transformations such as cropping, horizontal flipping, color distortion and blurring (for a full description of the augmentations used please consult [3] or appendix E.1 in our paper). Only very recently (last few months) has the use of other data augmentation schemas been considered and we plan to test our method also with these data augmentations. \\n\\n\\n[1] Heinze-Deml and Meinshausen, (2019). Conditional Variance Penalties and Domain Shift Robustness\\n\\n[2] Gong et al. (2016). Domain Adaptation with Conditional Transferable Components\\n\\n[3] Chen et al. (2020). A Simple Framework for Contrastive Learning of Visual Representations\\n\\n[4] Grill et al. (2020). Bootstrap your own latent: A new approach to self-supervised Learning\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their positive evaluation of the paper and their comments and suggestions for improvement. We will make the suggested changes in a future version. We would like to answer specific questions here.\\n\\n**Problem setting:** Y are general targets these could be labels (as we make explicit in section 2) or regression targets etc. We do not consider a multi-environment setup, only different possible tasks from the same underlying p(X).\\n\\n**Choice of augmentations.** Part of the aim of our work is to better understand the role that augmentations play in contrastive learning. For this reason we perform like-for-like comparisons using the same set of \\u201cSimCLR\\u201d augmentations commonly used in a wide range of recent works. We do not propose new augmentations, rather we frame them in the context of interventions on a latent style variable. In the case of the SimCLR augmentations, these are broadly content preserving: it is still possible to visually determine the class of the image after augmentation. \\n\\n**Comparison to strong augmentations.** Methods using a stronger set of augmentations only appeared on arXiv while we were preparing this submission. We have not yet had the chance to thoroughly evaluate the augmentations proposed within because this is concurrent work. While every effort has been made to compare against all recently published and even unpublished concurrent work where possible, the experiments in question are large-scale and require a lot of time to run. That said, we will endeavour to include these comparisons in a revision or the final version. It should be noted that in [1] they note that the introduction of stronger augmentations improves the performance of SimCLR by ~2% on the Imagenet benchmark. We hypothesize that ReLIC will benefit similarly. \\n\\n**Using style for solving the instance discrimination problem.** While the goal of the proxy task is to achieve instance discrimination, this is normally not the final task of interest. We therefore aim to solve the instance discrimination task by learning features which are robust. In fact, as shown in Appendix B there is a trade-off between creating separation between examples using instance discrimniation and ensuring within-class concentration using our invariance regularizer.\\n\\nIn the case of instance discrimination between dogs vs classifying dogs against cats, this does not preclude the use of eye colour as a useful feature. However, learned features are rarely this concrete: we instead expect to learn combinations of abstract features. More generally, one of the strengths of our approach is that it draws an explicit connection between the proxy tasks and the downstream tasks. However, it also suggests that if the proxy and downstream tasks are very misaligned then there is no expectation that Relic (or indeed any contrastive method) would work well. \\n\\n**Using C or f(X) as the representation.** The causal graph represents the idealised case. f(X) is an estimator for C. Were C known exactly we would be able to use this to solve the downstream tasks directly.\\n\\n**RL evaluation.** One of the key use cases for unsupervised representation learning is RL, where rewards can be sparse. We are not the first to propose using contrastive representation learning for reinforcement learning. Most notably [2] (who we compare with and out perform). 
It is clear that the augmentations we use are useful for learning good visual representations from pixels in order to perform the relatively simple tasks required in Atari. However, we agree that for more complex environments more thought around which interventions to use is required. This is a large open research area in its own right and we defer it for future work. \\n\\n[1] M. Caron, I. Misra, J. Mairal, Priya Goyal, P. Bojanowski, and Armand Joulin. (2020) Unsupervised learning of visual features by contrasting cluster assignments. \\n\\n[2] Aravind Srinivas, Michael Laskin, and Pieter Abbeel. (2020) Curl: Contrastive unsupervised representations for reinforcement learning.\"}",
"{\"title\": \"Response\", \"comment\": \"We thank the reviewer for their time and constructive feedback. With respect to the mentioned references, we thank the reviewer for pointing these out and we shall add some further discussion. We would like to point out however that unlike [2] we are not doing causal discovery. We appreciate that our regularizer has the flavour of consistency regularization from [3] and elsewhere with two important differences: a) we use the KL-divergence rather than L2 because of the probabilistic interpretation of our objective and b) our objective is completely unsupervised. We will revise the manuscript with additional discussion on this.\\n\\n**Constructing refinements:** as we mention in Section 3, the instance discrimination task is an example of the trivial refinement when the downstream task is classification: the \\\"labels\\\" of this task are the identities of the individual data points. Therefore we can always set this up as a proxy task independently of whether we know the downstream labels or not. In other problem settings it might be the case that meta-data relating to each observation is also collected. This might be useful for constructing proxy tasks but not exactly the same as the task of interest. Other refinements might be more appropriate depending on the task which might be constructed using e.g. metadata which is routinely collected in real-world contexts (e.g. EXIF, time and location data from images). \\n\\n**Use of context:** this very much problem specific. There is much work in vision where the aim is to learn representations which are invariant to background and surroundings (e.g. context). Whilst a car has a high probability of being in the road, the edge cases when cars appear off-road would be an important scenario to get right since it might signify an accident. The underlying assumption is that style is not always an accurate predictor and so learning representations invariant to style will ultimately lead to more robust predictions. Our results on ImageNet-C and -R highlight this.\"}",
"{\"title\": \"Strong results, but problems in the formulation\", \"review\": [\"## Summary\", \"This paper takes a causal viewpoint on self-supervised contrastive representation learning. The data is modeled as being generated from two independent latent factors: style and content, where content captures all information necessary for downstream tasks, and style captures everything that is affected by training augmentations. The main contribution is a specific regularizer for self-supervised contrastive learning, motivated by the assumptions about the data generation. The learned representations are evaluated in terms of robustness, classification, and generalization performance on ImageNet and in terms of performance on the RL Atari benchmark. The proposed approach is shown to outperform even very recent competing approaches.\", \"## Pros & Cons\", \"The explicit expression of the assumptions behind the data generation process as a (causal) network. It explains the assumptions made for augmentations and proxy tasks to make sense; it also nicely subsumes previous formulations.\", \"Visualization of the resulting feature space\", \"Performance appears great, the evaluation seems sufficient enough.\", \"Proof of Theorem 1 has problems, fixing it, meant to change the causal graph. Although, this might be fixable (see questions below).\", \"There is in fact no causal language needed for this paper. There is no causal discovery or anything happening. It is really just: Here are my assumptions about the mechanisms of data generation and everything else follows just from statistical independence. For instance, the proposed invariance criterion can also be formulated as the distribution $p(Y_t \\\\mid C)$ being invariant to distributional shifts of $p(S)$; and trying to gain robustness against covariate shifts is nothing new per se. Being explicit about the dependency between downstream tasks is important, but was not correctly stated in the paper.\", \"## Questions and concerns\", \"for intervening in a causal graph on a root note, I don't need the \\\"do\\\" notation. There is no need to cut the graph. So the statements are all trivial from this perspective.\", \"The proof of theorem 1 has problems, I think. Namely, after the first equal, you write $p^{do(s_i)}(Y_t \\\\mid Y^R)$, which should be $p^{do(s_i)}(Y_t \\\\mid Y^R, f(X))$ unless $Y_t \\\\perp f(X) \\\\mid Y^R$. In this case, you can not progress further:\", \"$$p^{do(s_i)}(Y_t \\\\mid Y^R, f(X)) = p(Y_t \\\\mid Y^R, f(X), S=s_i) \\\\neq p(Y_t \\\\mid Y^R, f(X)) \\\\neq p(Y_t \\\\mid Y^R, f(X), S=s_j) = p^{do(s_j)}(Y_t \\\\mid Y^R, f(X))$$\", \"This is because $Y_t \\\\not\\\\perp S \\\\mid f(X), Y^R$, so you can not drop the conditioning on $S$.\", \"However, if $Y_t \\\\perp f(X) \\\\mid Y^R$ holds, the proof would work again. This would be the case when $C \\\\rightarrow Y^R \\\\rightarrow Y_t$, i.e. the refinement task \\\"causes\\\" the downstream task. From the causal graph of Figure 1a, this is not the case. Although this would make sense intuitively, as the instance discrimination task needs more information than the downstream tasks, and you even state something in this direction in footnote 3. So I would ask the authors to clarify the intended causal connections between $C$, $Y^R$, and $Y_t$.\", \"Again on the proof, it looks like the assumption that $Y^R$ is a refinement for all tasks in $Y$ is not even needed?\", \"Can you phrase the concept of \\\"refinements\\\" in terms of causality? 
What does it mean for task $Y^R$ to be a refinement of $Y_t$?. Although you state that you use the \\\"causal concept of refinement\\\", a causal explanation is not given as far as I can see.\", \"You change the definition of the $p^{do(a) }$ to one with two interventions $p^{do(a_{ik})}$. This suddenly appears below Eq 2. and is then used in what follows. To me, this looks like one of the most important contributions. Clearly, it does not follow from the causal graph or so directly. However, it is one way of defining the regularizer that is consistent with the framework and it seems to be important. So please improve the presentation and introduce it properly.\", \"How would you place downstream tasks like object detection or segmentation in your framework? I would imagine that learning for instance discrimination does not keep all the information necessary to solve these kinds of tasks.\", \"## Suggestions and Comments\", \"I like that you make the modeling assumptions explicit\", \"Get rid of most of the causality jargon, and do not try to oversell your paper into the hyped field. Your paper is not doing causal inference or anything of the kind.\", \"Spend much more time on the regularizer and its definition with the 4 interventions that just fall from the sky. This is actually your contribution: defining the regularizer in the way you do it. A comparison to a simple baseline with just the X-entropy between two interventions would be good (or is that one of the baseline methods?).\", \"your style factors are also called nuisance factors in the literature\", \"consider making it more prominent in the paper that you solve the instance discrimination task\", \"the causal graph presented (in figure 1a: the meaning of arrows is not clearly defined)\", \"if the arrows are causal than it seems to contradict the refinement idea\", \"Fig 1b: make the 4 interventions more explicitly visible. Add the Bear picture to the top and bottom row\", \"also, it would be beneficial to see the experiments that support and visualize theorem 1 (e.g. to show that the KL for other tasks is also decreasing when we decrease the KL for instance discrimination task).\", \"Typos: signal -> signals (line 2 abstract),\", \"Theorem 1: probl also a quantifier for $t$ is needed\", \"Page 5: \\\"and so the left hand side of 4\\\" <- this should probably be \\\"the right hand side of 4\\\"\"], \"overall\": \"the paper has great results and can be a valuable contribution, but it has problems in the formulation and is overselling on being causal.\", \"update\": \"Thanks for fixing the statement of the assumptions such that Theorem 1 can hold. Update 4 -> 5. Some of my concerns are still not addressed in the revised version.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting ideas, good problem formulation and results\", \"review\": \"This paper proposes a framework for self-supervised representation learning using causality. The proposed model is formulated by assuming that the Data generation schema is composed of two independent mechanisms (ie., Style and Content) and only content is relevant for learning the underlying task. Thus, the Content is a good representation of the data and the goal of representation learning could be cast as a content estimation. Then, the authors use interventions on the Style (i.e., data augmentation in their formulation) to learn invariant representation under data augmentation (Style variable). To achieve this invariant prediction they propose a new constructive objective (ReLIC).\\n\\nThe paper is well written and easy to follow. My only concern is that the whole proposal relies on the assumption that Data generation is composed of two independent mechanisms (S and C) and the authors utilize various data augmentations as interventions on the Style variable S as they don\\u2019t have access to S. However, no details are given for the impact of the used data augmentation techniques on the learning better representations.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New perspective on self-supervised learning\", \"review\": \"In this paper, the authors propose a new understanding of self-supervised learning from a causal perspective. Specifically, a causal graph with style and content is assumed for the generating process of the inputs, such as images. Another assumption is that the down-stream tasks only rely on the content variable. By making use of the independent causal mechanism, the authors propose a new invariance regularization term, which is achieves good performance on several real datasets. Also, a new understanding of contrastive learning is provided.\\n\\nStrength\\n\\nThe understanding of self-supervised learning from a causal perspective is novel. The causal generative model assumed in this paper seems to be reasonable in many real scenarios. Especially, the image data were generated from content and style and usually the downstream tasks such as object recognition depends on the content.\\n\\nThe experimental results on Imagenet and Atari demonstrate the effectiveness of the method.\\n\\nWeakness\\n\\nThe idea of independence mechanisms was originally proposed in [1] and has deep connections to the modularity of a causal system and the concept of exogeneity in economics (Pearl, 2009). In specific, given two variables C and E, we say C is exogenous if P(E|C) remains invariant to changes in the process that generates C. In [1], the independence mechanism is defined as follows:\\n \\u201cWe finally assume that the mechanism is \\u201cindependent\\u201d of the distribution of the cause in the sense that P(E|C) contains no information about P(C) and vice versa; in particular, if P(E|C) changes at some point in time, there is no reason to believe that P(C) changes at the same time.\\u201d When P(c) and P(E|C) both change, they change independently of each other [2]. It would be better if the authors could explore the literature a little bit more and add corresponding discussions. \\n[1] Sch\\u00f6lkopf, Bernhard, et al. \\\"On causal and anticausal learning.\\\" Proceedings of the 29th International Coference on International Conference on Machine Learning. 2012.\\n[2] Huang, Biwei, et al. \\\"Causal discovery from heterogeneous/nonstationary data.\\\" Journal of Machine Learning Research 21.89 (2020): 1-53.\\n\\nThe proposed invariance regularization is closely related to the consistency regularization [3] in semi-supervised learning. The relation and difference to consistency regularization needs to be discussed.\\n[3] Sajjadi, Mehdi, Mehran Javanmardi, and Tolga Tasdizen. \\\"Regularization with stochastic transformations and perturbations for deep semi-supervised learning.\\\" Advances in neural information processing systems. 2016.\\n\\nThe refinement seems counterintuitive to me. The authors define a refinement of one problem as another more fine-grained problem. For a downstream task, when could it be easier to get the constructed labels for a refinement than obtain the labels of the downstream task?\\n\\nThe independence of content and style seems to be a strong assumption. In computer vision, one extensively studied problem is how to make use of context, e.g., background to help object recognition. For example, a monitor will have a high probability to stay on a desk. A car has a high probability to be on the road. 
Could the authors give some scenarios where the context can be independent of content?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interpretation is nice, but seems to oversimplify the problem.\", \"review\": \"## General Summary:\\n\\nThe authors propose a causal interpretation of the self-supervised representation learning problem. They introduce a method (ReLIC) that employs invariance constraints on the proxy objectives to enforce better generalization. \\n\\nThe invariance is enforced through an additional constraint across interventions on the style generating factor, more concretely, these interventions manifest themselves in form of simple data augmentation strategies (rotation, translation, scaling, etc).\\n\\nThe authors interpret contrastive learning through the concept of refinements.\\n\\nTo the extent of my knowledge, all relevant related work has been mentioned (Invariant Causal Prediction, Invariant Risk Minimization). It would be nice if the authors would make the separation in the main text more clear, the main difference being the regularization and the way they choose the interventions.\\n\\n## Overall Recommendation\\n\\nI find the method proposed in the paper novel and well explained and put in the causal framework. Overall, I think it's an accept but to be a stronger one the authors should address the points in the comments.\\n\\n## Writeup\\n\\nThe paper is clearly written, with a few improvements that can be made (a bit of notation and some typos, unfinished sentences).\\n\\n\\n## Pros\\nThe causal framework is well-motivated, i like the separation in content and style factors. The interpretation of self-supervised learning as invariant prediction with refinements is valid. The results seem significant, since ReLIC outperforms other approaches in an RL (Atari) and classification setting (ImageNet).\\n\\nI like the motivating sentence in the paper that real-world meta-data is abundant and can be used to construct refinements more efficiently, which speaks for the relevance of the contribution.\\n\\nTo the extent of my knowledge the contribution is novel, I am not aware of any other work that connected contrastive learning with the causal framework.\\n\\n## Cons\\n\\nThe problem of choosing the intervention on style factors is a bit understated, the interventions mentioned need not be content-preserving.\\n\\nI find the evaluation on the Atari benchmark a bit misguided, although the method shows superior performance in some environments, RL is exactly the setting where it is difficult to make good interventions for better generalization, i.e. it is difficult to determine in a data-driven way what separates the content from the style. \\n\\n## Comments\\n\\nIn Section 2 would be useful to formally define $Y_t$ as a set of labels. I am not sure about the multi-environment setup, since the distribution p(X) changes also in switching environments (domain shift), which would require different formalism.\\n\\np.4 par.1 where you separate style from content. Shouldn't style be also what causes the fine-grained instance separation. For example, if I have the color of eyes of dogs. In the instance classification task, this would be part of content vs. in the cats and dogs task, this would be part of style. Which would imply that the causal graph in fig. 1 should also look differently, since color of eyes doesn't cause a dog to be a dog.\\n\\np.4 ReLIC objective. I realize the shorthand notation, but the outer expectation should be over $x ~ p(X)$?\\n\\np.4 par. 3 it feels a bit off to me to name content C a representation, whereby it's a latent causal factor that we do not observe. 
When talking abut representations, we mostly mean f(X)? But sure, if we would use C, we would get an invariant predictor by definition of the causal graph.\\n\\np.4. last paragraph - fine-grained problems that->than $Y_t$\\n\\np.4. last paragraph - at this point, it is very tricky to state what is a content-preserving data augmentation, this is very much tailored to the problem at hand. A very basic example, random cropping is not content preserving if it doesn't show the dog. This is going back to my critique of the work that the problem of choosing intervention is oversimplified. But I understand, given that we know how to do interventions on the style variable, we will be able to extract an invariant predictor.\\n\\np.6 par. 2 Unfinished sentence Unlike...\\n\\np. 7. I think that the experiment on ImageNet shows exactly the point that it is necessary to find good interventions, since the methods employing strong augmentation outperform ReLIC. It would be interesting to see what is the performance of ReLIC with the same set of strong augmentations.\\n\\np. 7 table 2, why aren't the strong augmentation baselines mentioned in this table? I am not familiar with the type of augmentations that were done in those baselines.\\n\\np. 8 reinforcement learning evaluation. Again, I think this is a bit misguided. It is clear that in the general sense we need invariant representations for reinforcement learning, but choosing the right interventions is difficult.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
TwkEGci1Y- | On the Role of Pre-training for Meta Few-Shot Learning | [
"Chia-You Chen",
"Hsuan-Tien Lin",
"Gang Niu",
"Masashi Sugiyama"
] | Few-shot learning aims to classify unknown classes of examples with a few new examples per class. There are two key routes for few-shot learning. One is to (pre-)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples. The other, called meta few-shot learning, is to couple pre-training with episodic training, which contains episodes of few-shot learning tasks simulated from the known classes. Pre-training is known to play a crucial role for the transfer route, but the role of pre-training for the episodic route is less clear. In this work, we study the role of pre-training for the episodic route. We find that pre-training serves a major role of disentangling representations of known classes, which makes the resulting learning tasks easier for episodic training. The finding allows us to shift the huge simulation burden of episodic learning to a simpler pre-training stage. We justify the benefit of such a shift by designing a new disentanglement-based pre-training model, which helps episodic learning achieve competitive performance more efficiently. | [
"Meta-Learning",
"Episodic Training",
"Pre-training",
"Disentanglement"
] | Reject | https://openreview.net/pdf?id=TwkEGci1Y- | https://openreview.net/forum?id=TwkEGci1Y- | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"s1p3aWRkhst",
"t04Q6lueCUt",
"1lbfco9UIL",
"p1yz5JpsH3Z",
"MRSCzo5gcTW"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040407233,
1604024810740,
1603935363814,
1603870024096,
1602791717075
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3649/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3649/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3649/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3649/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"There is value in analyzing pre-training for few-shot learning, and the observation that improved disentanglement might lead to better initialization schemes for few-shot learners is worth exploring. However, in its current state, the reviewers do not think the paper is ready for publication. Specifically, work needs to be done to improve the clarity, comparison to related work, and experimental analysis.\"}",
"{\"title\": \"Neat method, a few questions\", \"review\": \"This paper studies how a regularization loss that measures the distance between feature $x$ and learnable class embedding $W_y$ helps meta-learning. The authors show that the SNN loss is lower when the regularization loss is in use in the meta-training phase, which supports the authors' conjecture that this regularization helps disentanglement of the penultimate layer in the backbone network, thus enhancing episodic learning.\", \"pros\": \"1. The method is very neat and easy to implement\\n2. The experiment results are mostly in favor of the authors' claims\", \"questions\": \"1. Not sure if the term \\\"pre-training\\\" is proper here. The main focus of this paper, is the regularization term $l_{reg}$. Why don't tell the audience about this in a more straightforward way? \\n2. Should it be $N$-way $K$-shot, why $1 \\\\leq y_i\\\\leq \\\\mathbf{K}$? I believe there's a typo. \\n3. In Figure 1, all of the three lines are of some sort of V-shaped, i.e. they grew up a bit after reaching minima in the middle. Do you think might be the reason for this phenomenon?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"#### Summary\\n\\nThe submission attempts to understand the role of episodic fine-tuning in a few-shot classification context.\\n\\nUsing Prototypical Networks as a case study, the authors measure the entanglement of class representations (using a soft nearest neighbour measure as was done by Frosst et al. (2019) in the supervised learning case) during episodic fine-tuning for several backbone architectures fine-tuned on 5-way 5-shot mini-ImageNet episodes. Based on these observations, the paper concludes that episodic fine-tuning tends to decrease entanglement in the penultimate layer and proposes to add a regularization term to the supervised pre-training phase which encourages disentanglement in that layer.\\n\\nResults are presented on the mini-ImageNet benchmark and the proposed approach is claimed to perform on-par with or better than competing approaches. Accuracy curves are also shown to support the claim that the proposed approach requires training on less episodes.\\n\\n#### Strengths and weaknesses\\n\\n* **+** The paper\\u2019s topic is very relevant to questions being raised in the recent literature regarding the differences between supervised and episodic training.\\n* **+** The premise of studying the role of episodic fine-tuning from the perspective of feature (dis)entanglement is an interesting application of Frosst et al. (2019)\\u2019s work to the few-shot classification setting.\\n* **-** Vague or inconsistent use of terminology.\\n* **-** Poor writing and presentation.\\n* **-** Performance of the proposed approach is not competitive with competing approaches, contrary to what's claimed in the submission.\\n\\n#### Recommendation\\n\\nI recommend rejection. While the paper\\u2019s premise is interesting, the submission suffers from poor writing and presentation quality, and I\\u2019m not entirely convinced by the claimed causal relationship between representation (dis)entanglement and episodic fine-tuning.\\n\\n#### Detailed justification\\n\\nMy main concerns have to do with the presentation of the results and the interpretation of episodic fine-tuning as a way to decrease the representation\\u2019s entanglement.\\n\\nThe list of competing approaches in Table 1 is incomplete and outdated. For instance, Tian et al. (2020)\\u2019s \\\"Rethinking Few-Shot Image Classification: a Good Embedding Is All You Need?\\\" obtains around 64.8% on mini-ImageNet 5-way 1-shot using a ResNet-12 architecture. Tian et al. (2020) also lists several approaches using a Conv4 backbone that achieve a mini-ImageNet 5-way 1-shot performance greater than 50.4%. I therefore disagree with the assertion that \\\"our method shows competitive performance to other methods\\\" and that \\\"when the backbone is shallow, we have outperformed all other methods with the same backbone\\\".\\n\\nThe paper hints at the fact that the Euclidean metric used by Prototypical Networks doesn\\u2019t work well with highly entangled representations. Another possible interpretation is that the backbone is pre-trained using a linear output layer, which computes the inner-product between the representation and class weight vectors, and that the squared distance computed by the Euclidean metric is not well suited to the resulting representation. According to that interpretation, part of what episodic fine-tuning does is correct for this mismatch in metrics between (meta-)training and (meta-)testing. 
I think the paper\\u2019s interpretation would be more convincing if the pre-trained backbone used a quadratic output layer (i.e. the logits are computed as the negative squared distance between the representations and class weight vectors) and therefore controlled for metric mismatch.\", \"i_have_additional_issues_with_the_way_in_which_results_are_presented\": [\"Section 3.3 mentions a result presented in Section 4.4, then Section 3.4 begins by stating that the previous section concludes that the last layer in the backbone is more disentangled after episodic training. This means that in order to have the proper context to understand Section 3.4, the reader needs to jump forward and read Section 4.4. I recommend changing the order of the presentation so that it is more linear.\", \"In Table 1, 95% confidence intervals are provided, but the absence of identification of the best-performing approach(es) in each setting makes it hard to draw high-level conclusions at a glance. I would suggest bolding the best accuracy in each column along with all other entries for which a 95% confidence interval test on the difference between the means is inconclusive in determining that the difference is significant.\", \"Overall, the submission could benefit from another round of careful proofreading. It contains several grammar mistakes which, while they do not significantly compromise clarity, make reading the paper harder than it should be. Examples include:\", \"\\\"[...] and crafted the hard episode to make necessary episodes fewer.\\\"\", \"\\\"Disentanglement is the property whether the data-points [...]\\\"\", \"\\\"Benefited by the understanding, [...]\\\"\"], \"i_also_noted_a_few_false_or_unsupported_statements\": \"* \\\"Due to episodic training, meta-learning methods generalize better than traditional transfer-like methods for the novel classes.\\\" Can the authors expand on this? What do they mean by \\\"generalize\\\"? Do they refer to test classes from the same domain (e.g. mini-ImageNet test classes), or test classes from different domains (i.e. cross-domain generalization)? I don\\u2019t see the statement as generally accepted, especially given the many recent papers that show strong performance with well-tuned transfer learning baselines.\\n* \\\"Soft-Nearest Neighbour loss [...] is proposed by Frosst et al. (2019)\\\" Frosst et al. credits Salakhutdinov and Hinton (2007) for the soft nearest neighbour loss, which itself draws inspiration from Goldberger et al. (2005)\\u2019s Neighbourhood Component Analysis.\\n* \\\"Though the split is quite naive, the afterward episodic learning shows promising improvement.\\\" Can the authors point to work that provides empirical evidence for this statement?\\n\\nFinally, the submission\\u2019s use of terminology is vague or inconsistent at times:\\n\\n* The categorization of approaches as either \\\"supervised pre-training\\\" or \\\"meta few-shot learning\\\" feels incomplete to me. While some approaches (most recently Meta-Baseline and Meta-Dataset\\u2019s few-shot learners) do perform supervised pre-training followed by episodic fine-tuning, most well-known approaches such as Matching Networks, Prototypical Networks, MAML, etc. do not prescribe a supervised pre-training phase.\\n* The term \\\"meta few-shot learning\\\" is not widely used in the literature and appears to be introduced in this paper as far as I can tell. 
It\\u2019s defined in the introduction to be the combination of supervised pre-training and episodic fine-tuning, but it\\u2019s also used in Section 3.2 to categorize Prototypical Networks, whose formulation does *not* prescribe a supervised pre-training phase.\\n\\n#### Questions\\n\\n1. What do the authors mean when they state that \\\"[due] to the parallelism property, normal training literature is much faster than episodic training\\\"? Isn\\u2019t the forward propagation through the embedding function in a few-shot learner such as Prototypical Networks just as parallelizable as the forward propagation in a supervised classifier?\\n2. The submission claims that the proposed approach alleviates the burden of training episodically on a large number of episodes. Given the performance of well-tuned supervised baselines, why should we perform episodic fine-tuning? Shouldn\\u2019t it be sufficient to use the pre-trained backbone as-is?\\n3. The submission follows Chen et al. (2020) for the pre-training and episodic fine-tuning procedure but doesn\\u2019t compare against it in Table 1. How come?\\n\\n#### Additional feedback\\n\\n1. This is arguably inconsequential, but the paper motivates few-shot learning using bird classification as an example, stating that \\\"an ornithologist typically can only obtain a few pictures per bird species\\\". I don\\u2019t know if I agree that this is typically the case: looking at the iNaturalist online database, thousands of pictures can be found for a large number of bird species.\\n2. The related work section appears to contain mostly pre-2020 references and does not mention recent work such as (Simple) CNAPs, SUR, CrossTransformers, etc.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"On the Role of Pre-training for Meta Few-Shot Learning\", \"review\": \"Paper summary\\n\\nIn this paper the authors provide a summary of the role of pre-training in meta-learning. They investigate the performance implications of pretraining for a common pretraining method, prototypical networks, and propose an additional regularization loss to improve the generalization of pre-training. They evaluate the proposed method on miniimagenet and cifar100\\n\\n---------------------------------------------------------------------------------------------------------------------------\\nPositives and negatives\\n\\n+ The summary of the pretraining role in meta-learning is accurate, and the proposed idea of using soft-nearest neighbor as a regularization term in pretraining is novel.\\n+ The method is simple to understand and implement.\\n+ The experiments section is well documented.\\n+ I like the algorithm explanation of the method.\\n- The paper should mention that pre-training in meta-learning is not at all a novel idea, many papers from couple of years ago use pretraining to initialize the weights of their convolutional backbone (see for example LEO, which is cited by the authors). Therefore the authors should make it clear that the only novel idea here is the use of Soft nearest neighbor loss as a regularization.\\nGiven the above and if we look at table 1, we can see that the soft nearest neighbor loss does not actually improve the accuracy compared to simple pretraining for larger backbones. Therefore I would consider the significance of this result very small, and therefore further study should be done to see if it helps under different conditions (e.g. different datasets, more limited data regimes, etc).\\n- Given the small size of the datasets under consideration, I am surprised the experiments section is so weak. I would expect to see experiments for more datasets and more meta-learning methods, e.g. maml, or matching networks, or even simple linear fit on the embeddings). For an example of a more thorough set of experiments, refer to https://arxiv.org/abs/1910.01319, which also looks at pretraining in the context of meta-learning but examines many more different backbone architectures, meta-learning algorithms, and datasets.\\n\\n---------------------------------------------------------------------------------------------------------------------------\\nRecommendation\\n\\nAs it stands this paper requires a significant rewrite to meet the standards of this conference. Therefore I recommend this paper be rejected. The following aspects should be improved: the paper should be clear that the novel contribution is the soft-nearest neighbor regularization, and therefore the major focus of the experiments section should be on ablating this loss and testing it extensively under different conditions (e.g. more datasets, different meta-learning methods, low data regime, etc). Under the tested conditions, the proposed soft-nearest neighbor regularization does not seem to help much. The paper also should be thoroughly reviewed and rewritten as it was very hard to follow and had several sentences which are not possible to understand (a few examples below).\\n\\n---------------------------------------------------------------------------------------------------------------------------\\nQuestions\\n\\n* Figure 1, what is the scale of SNN loss? From the y-scale it looks like it varies from 11.5 to 11.44. I have no idea whether this is basically just noise. 
It would be good to explain, either in the caption or the figure, what the SNN value is for fully disentangled and fully entangled data.\\n* I didn\\u2019t understand what the authors mean by the \\u201cparallelism property\\u201d (Section 2.4). Could you elaborate further on it? The episodes in episodic training are also trained in a batched SGD setting, so I would consider them \\u201cparallel\\u201d in a sense.\\n* In the last sentence of 2.4 the authors say \\u201cfor shallow and deep backbones, it increases performance\\u201d; however, from table 1 I read that it only increases performance for shallow backbones. Could the authors justify this sentence?\\n---------------------------------------------------------------------------------------------------------------------------\\nFeedback (not related to the score)\\n\\n* Table 1, it would be good to have an explanation of what the hyperparameter C is. The only mention of it is in Eq. 5 as the exponent of alpha, so it can be understood as the regularization, but it would be nice to explain it both in the main text under eq. 5 and in the caption for Table 1.\\n* In general I would recommend the authors write more in the figure captions. For example figure 3 and figure 4 are very hard to understand, and a few sentences explaining the point of the figure would really help the reader.\\n* To improve the strength of the paper, I would suggest focusing on the disentanglement properties of the SNN regularization and, as proposed above, exploring it under a lot more settings. In addition, I would look at how this SNN regularization affects domain transfer, as I suspect it could actually be more useful in that case (e.g. pretrain on miniimagenet, test on CIFAR or even MNIST).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Studies an interesting problem, but lacks an important comparison with previous work and is experimentally weak.\", \"review\": \"Summary\\n========\\nThis paper investigates the role of pre-training as an initialization for meta-learning for few-shot classification. In particular, they look at the extent to which the pre-trained representations are disentangled with respect to the class labels. They hypothesize that this disentanglement property of those representations is responsible for their utility as the starting point for meta-learning. Motivated by this, they design a regularizer to be used during the pre-training phase to encourage this disentanglement to be even more prominent with the hope that this pre-trained solution is now closer to the optimal one, thus requiring less additional episodic training which is time-consuming. They show experimentally that their modified pre-training phase sometimes leads to better results as an initialization for Prototypical Networks compared to the standard pre-trained solution, and sometimes converges faster.\\n\\nPros\\n====\\nThe topic of study of this paper is very interesting. I definitely agree that the role of each of the pre-training and meta-learning phases are not yet well-understood, and making progress on understanding this will shed light on the most promising directions for few-shot classification.\\n\\nAlso, using the Soft-Nearest-Neighbor-Loss is an interesting property to measure (and to try and reinforce) in the pre-trained representations.\\n\\nCons\\n====\\n[A] The biggest weakness of this work, in my opinion, is the lack of connection with previous work that is very similar. Specifically, the property that is referred to in this paper as \\u2018disentanglement\\u2019 is very related to previous notions that have been studied in this context of pre-training for few-shot learning. Specifically, [2] argued that the success of the cosine classifier during pre-training (compared to a standard classifier) is due to explicitly minimizing the intra-class variance of each class (which leads to better clustering, and to better \\u201c\\u2018disentanglement\\u201d in the way that the term is used in this paper).\\n\\nPushing that direction further, [1] proposed regularizers whose purpose is to directly encourage the pre-training phase to have this property of better clustering: minimizing the intra-class variance and maximizing the inter-class variance. This paper should be discussed as related work and should be compared to experimentally since their approach is very similar to the one proposed here.\\n\\n[B] Unfortunately I also found the writing to be of poor quality. Some minor grammatical or wording errors did not distract me too much from understanding the intended meaning, but there were certain statements which I found hard to understand, or disagreed with. Some examples are below:\\n\\n\\u201cDue to episodic training, meta-learning methods generalize better than traditional transfer-like methods for the novel classes\\u201d. The jury is actually still out on this, so I don\\u2019t think it\\u2019s appropriate to make this claim. Better generalization was the motivation of episodic models, indeed, but in practice non-episodic approaches have been shown to perform quite well, as pointed out in this paper too.\\n\\n\\u201cEpisodic sampling is time-consuming\\u201d. Can you explain why that is? 
I don\\u2019t disagree (based on my experience too) but I don\\u2019t believe it is obvious, and it would be useful to explain this.\\n\\n\\u201cIn the previous section, we conclude that the last layer in the backbone would be more disentangled after episodic training\\u201d. It\\u2019s unclear to me how that conclusion follows.\\n\\n[C] Another weakness is that the proposed method does not perform too strongly compared to the baselines / previous methods. It seems that the gain is larger for smaller architectures, which is in line with the observation in [2] that minimizing intra-task variance is most beneficial for small backbones. On the other hand, for larger architectures, RP-Proto is not better than plain proto (notice the overlap in the confidence intervals in the respective entries of Table 1). \\n\\n[D] I also found the experiments to be weak in terms of the analysis of disentanglement during pre-training and episodic training. Contrary to the authors\\u2019 observation, looking at Figure 1 it doesn\\u2019t really seem to me that the disentanglement loss goes down much during episodic training. The conv and resnet18 curves are mostly flat. The resnet10 one does go down noticeably but then starts going up again. I\\u2019m also not sure what causes the large discrepancy in the behavior of the resnet10 curve compared to the other two. My initial thought was network capacity, but resnet 10\\u2019s capacity is in between that of the other two networks if I understand correctly, so it\\u2019s hard to draw a conclusion there.\\n\\nOverall\\n======\\nI vote for rejection of this paper in its current form, mostly due to the missing comparison with the very related method mentioned above, the quality of the writing, and the weakness of the experimental results, as described above.\\n\\nSuggestion for additional experiments\\n============================\\nTable 1 shows that MetaOptNet outperforms RP-Proto (I\\u2019m looking at the entry of RP-Proto with the ResNet12 backbone for an apples-to-apples comparison with MetaOptNet). I would be curious to see an RP-MetaOptNet variant too. More generally, does the proposed regularizer also lead to improvements in episodic approaches that are closer to the state of the art than Prototypical Networks?\\n\\nFurther, as an additional data point, it would be useful to also report the performance of the pre-trained network itself on the few-shot test tasks (without a meta-learning phase at all). For an apples-to-apples comparison with the reported Prototypical Network variants, Prototypical Networks can still be used to solve each test task, but operating directly on top of the representation learned from pre-training, instead of the representation produced by the episodic phase.\\n\\nAdditional comments for fixing minor issues and improving clarity\\n===============================================\\nBelow are a few more recommendations and singled-out sentences that I think should be re-written to improve clarity.\\n\\nIn the \\u201cMixed Framework\\u201d section, [3] should also be cited among the papers that used a pre-trained solution as the initialization for the meta-learning stage, as this is how the meta-learners in that paper were trained as well.\\n\\nThe provided reference for fo-MAML is incorrect. fo-MAML was actually introduced in the original MAML paper (Finn et al., 2017). 
The provided (Nichol et al., 2018) reference introduced Reptile, which is similar but not the same as fo-MAML.\\n\\n\\u201cEpisodic training is key to make meta-learning prominent\\u201d. I\\u2019m not sure what this sentence means. Are there other ways of meta-learning in this context without episodic training?\", \"the_description_of_optimization_based_meta_learning\": \"\\u201c[...] try to get an embedding that could easily fit subtasks by adding some extra layers\\u201d. This is not entirely accurate. MAML, for instance, does not add any extra layers per task. Instead, the entire network is rapidly fine-tuned within each task as well as meta-learned across tasks.\\n\\nReferences\\n=========\\n[1] Unraveling Meta-Learning: Understanding Feature Representations for Few-Shot Tasks. Goldblum et al. ICML 2020.\\n[2] A Closer Look at Few-shot Classification. Chen et al. ICLR 2019.\\n[3] Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples. Triantafillou et al. ICLR 2020.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
D_KeYoqCYC | Sparse encoding for more-interpretable feature-selecting representations in probabilistic matrix factorization | [
"Joshua C Chang",
"Patrick Fletcher",
"Jungmin Han",
"Ted L Chang",
"Shashaank Vattikuti",
"Bart Desmet",
"Ayah Zirikly",
"Carson C Chow"
] | Dimensionality reduction methods for count data are critical to a wide range of applications in medical informatics and other fields where model interpretability is paramount. For such data, hierarchical Poisson matrix factorization (HPF) and other sparse probabilistic non-negative matrix factorization (NMF) methods are considered to be interpretable generative models. They consist of sparse transformations for decoding their learned representations into predictions. However, sparsity in representation decoding does not necessarily imply sparsity in the encoding of representations from the original data features. HPF is often incorrectly interpreted in the literature as if it possesses encoder sparsity. The distinction between decoder sparsity and encoder sparsity is subtle but important. Due to the lack of encoder sparsity, HPF does not possess the column-clustering property of classical NMF -- the factor loading matrix does not sufficiently define how each factor is formed from the original features. We address this deficiency by self-consistently enforcing encoder sparsity, using a generalized additive model (GAM), thereby allowing one to relate each representation coordinate to a subset of the original data features. In doing so, the method also gains the ability to perform feature selection. We demonstrate our method on simulated data and give an example of how encoder sparsity is of practical use in a concrete application of representing inpatient comorbidities in Medicare patients. | [
"poisson matrix factorization",
"generalized additive model",
"probabilistic matrix factorization",
"bayesian",
"sparse coding",
"interpretability",
"factor analysis"
] | Accept (Poster) | https://openreview.net/pdf?id=D_KeYoqCYC | https://openreview.net/forum?id=D_KeYoqCYC | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"kcevaEmBwWG",
"XI2F1oskGfd",
"1chkg5OMab",
"6qNrmR6jPiv",
"O2muHUmLvXM",
"GN5iobEDmFG",
"1KqCEzExL2",
"uOsT8HCF_Dm",
"4xKPbC4-idj",
"QzKaSsLUwpq",
"hCeryvKtqNg",
"W6vsKKgMKgw",
"bAOUlLaVN4g",
"00pJq1UkJcX",
"5PZ5nE84etL",
"PCM4HTQSqUC",
"kZH2Siy_r6-",
"tz4cG9Mcmtg"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040379313,
1606272763790,
1606116431969,
1605987289681,
1605971815071,
1605971259577,
1605970435566,
1605968970012,
1605965649265,
1605912619058,
1605912002364,
1605910884778,
1605910443162,
1605910358128,
1605909576806,
1604580471712,
1603993930153,
1603199493256
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3647/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The authors present a hierarchical factorization of the Poisson matrix and explain why sparcity in the encoder is important for interpretability. The reviewers appreciated the contribution of the paper and highlighted the advantage of such an approach for users. The authors have improved their initial version by adding more detail on inferences and experiments. The decision is to accept the paper.\"}",
"{\"title\": \"Summary of major changes\", \"comment\": [\"We greatly enjoyed this open peer review experience and would like to thank the reviewers for helping us to improve our manuscript. We have touched on each of the individual changes in various comments, however, here is a summary of the major changes that we have made to the manuscript:\", \"More information on inference\", \"Direct comparison to standard HPF, on the synthetic datasets (Fig 3)\", \"A new comorbidity factorization figure (Fig 4)\", \"More-detailed exposition of interpretation within the comorbidity example\", \"Less-rushed exposition of the main model equation (Eq 2)\", \"Emphasis on how we are using the encoder as a proxy for subsequent Bayesian inferences (Eq 3)\", \"Better motivation behind the choices of priors, noting rationale behind hyperparameter presets used in the manuscript.\", \"Training runs for generating our synthetic results are now given in a notebook in the Supplemental Materials\", \"We hope that these changes will be satisfactory. Thanks!\"]}",
"{\"title\": \"Strong paper in development\", \"comment\": \"Dear authors,\\n\\nI am responding for all of your separate comments here, just to keep things in one place. Overall, I am delighted for the very detailed responses you are providing, and all of your responses are reasonable.\\n\\nRegarding inference, I fully subscribe to the probabilistic programming ideal of focusing on model specification and not on inference details (unless writing a paper on inference as such), but my main concern was about whether your empirical experiments are sufficiently transparent for the reader and can be relied on. For this is it is important to provide sufficient details both on what you did and how well it worked. The new Inference section motivates the choices much better and is satisfactory, and the WAIC-based comparison of the link functions shows that the claims on evaluation methods that could be used were real.\\n\\nRegarding the model and its intuition, I still think that the idea is nice and clearly worth publishing. My main concern here was that of a bit lost opportunity to describe the connections in a way that maximally many readers understand what you are aiming at. The revised version is again better, but the presentation remains quite compact (naturally also due to space constraints) and I am not sure how much it helps readers who are already not thinking in these terms. \\n\\nOverall, I think that the revised paper has improved and I am still leaning towards acceptance, but the paper still feels a bit rushed in presentation. I am increasing my score by one point to account for the improved discussion of the missing details.\"}",
"{\"title\": \"Re: Use the review process as you wish\", \"comment\": \"Thanks for the encouragement.\\n\\nWe have posted a revised version of our manuscript to address the critiques mentioned. In particular, we added more exposition around the interpretation of the real-data example. We also added a direct comparison to HPF. Finally, our new version incorporates additional background behind inference.\"}",
"{\"title\": \"Use the review process as you wish\", \"comment\": \"I understand that you are discovering the review discussion. Use it as you wish. Do not feeling pressurized to do it one way or the other. I for one, I'm simply genuinely interested in understanding better your view, the points that you are trying to put forward, and the evidence that you are bringing. Better exposition makes strong manuscripts.\"}",
"{\"title\": \"A metaphor\", \"comment\": \"Additionally, I thought I would comment on how our misinterpretation of the review process is a metaphor for how matrix factorization methods are misinterpreted. The review process resembles that of an ordinary journal in many ways. For this reason, we had the apriori bias that it would proceed along the same dynamics -- a deadline for us to dump a comprehensive set of revisions and a detailed rebuttal. Instead, if we put aside our biases and read the instructions in more detail, we would have seen that there is to be interaction between us and the reviewers.\\n\\nSo, we made fundamentally the same mistake that people make in misinterpreting existing factorization methods. All matrix factorization methods bear strong resemblance to PCA. In PCA, the inferred loading matrix is orthogonal. For this reason, the transpose of this matrix provides an inverse transformation - one never needs to think about whether a given transformation is data -> representation or representation -> prediction. Given one, the other is implied. However, the central message on our manuscript is that this is not the case for factorization methods in general.\"}",
"{\"title\": \"Re: Emphasis on interpretability, but the improvement on this aspect is not well demonstated\", \"comment\": \"We thank the reviewer for their concern. While we work on improving the exposition behind the two main examples we give in the text (Synthetic data + comorbidity), so that our point on interpretability is clear, we wanted to comment on the hyperparameters.\\n\\nWhile our method has the appearance of having many hyperparameters, we have set it up so that this is not the case. For example, some the parameters $\\\\eta_i$ and $\\\\xi_u$ are scaling parameters that are computed directly from the data. The reason we do this is so that we can set our priors in the model so that they generalize without much in the way of hyperparameter optimization, so that the priors are weakly informative in that they regularize the problem but do not influence the solution much when ample data exists.\\n\\nThe only real hyperparameters in our model control the scaling of the horseshoe distributions, though in our presentation of the model we have preset the values for these parameters (see Eq 3), which will be Eq 4 in our revision. In the horseshoe prior, the scaling variables control the expectation for the amount of apriori sparsity in the solution. In Piironen and Vehtari (2017), they show that the expected number of nonzero components scales to the square root of the size of the parameter vector. **We note that this is just what is apriori expected, and there is actually a wide range that is admissible.** Based on their analysis, we pre-set the scale of the Horseshoe on the variables u to $1/\\\\sqrt{UI}$, so that we have invariance to problem size. All of this is theoretical so far.\\n\\nWhat we didn't show in the main text was that we tuned the method to different combinations of $U$ and $I$ to exhibit the behavior seen in analysis of synthetic data in Fig 2. The behaviors we wanted were\\n\\n1. Random unstructured noise variables are excluded\\n2. Covarying variables are included\\n\\nOur presets for all of the potential hyperparameters exhibited the desired behavior robustly across many choices of $U$ and $I$, from small to large. In our revision we'll comment a bit more about this. In essence, we have set it up so that there are zero hyperparameters in our model, though one might want to make some modification for less or more sparsity as desired, by multiplying the $1/\\\\sqrt{UI}$ term by a constant.\"}",
"{\"title\": \"Our new description of inference\", \"comment\": \"We haven't yet posted the revision, while we are making other edits, but here is the inference scheme:\\n\\nThe model of Eq. 2 is a generalized linear factor model that we have mathematically related to probabilistic autoencoder. When augmenting HPF with explicit encoder inference, as we have done, one obtains a probabilistic autoencoder. This fact suggests that other work in the literature can serve as a guide for training, especially work done on using the horseshoe prior in Bayesian neural networks (Ghosh & Doshi-Velez, 2017a; Ghosh et al., 2018; Louizos et al., 2017).\\n\\nIn particular, Ghosh et al. (2018) investigated structured variational approximations of inference of Bayesian neural networks that use the horseshoe prior and found them to have similar predictive power as mean-field variational approximations. The disadvantage of structured approximations is the extra computational cost of inferring covariance matrices. For these reasons, we focus on mean-field black-box variational inference, using Ghosh et al. (2018) as a guide, noting consistency of their scheme with other works that have investigated variational inference on problems using the horseshoe prior (Wand et al., 2011; Louizos et al., 2017).\\n\\nAs in Ghosh & Doshi-Velez (2017a); Ghosh et al. (2018); Chang et al. (2019), for numerical stability, we reparameterize the Cauchy distributions in terms of the auxiliary inverse Gamma representation (Makalic & Schmidt, 2016),\\n\\n(Equation)\\n\\nWe perform approximate Bayesian inference using fully-factorized mean-field Automatic Differenti-ation Variational Inference (ADVI) (Kucukelbir et al., 2017). For all matrix elements, we utilizedsoftplus-transformed Gaussians, and coupled these to inverse-Gamma distributions to for scale param-eters, as investigated in Wand et al. (2011).\"}",
"{\"title\": \"Emphasis on interpretability, but the improvement on this aspect is not well demonstated\", \"comment\": \"I thank the author for their reply.\\n\\nI understand that the main benefit may be on interpretability, but this benefit is not demonstrated in a convincing way. Theoretical arguments, such as the need for sparsity, are not enough, as the end judge of interpretability is a human. Reading the authors' reply, I get the impression that the contributed model is more stable, or easier to tune, with regards to \\\"details\\\" such as hyper parameters. If this is the case, it should be demonstrated. The authors claim that shortcomings in interpretability of the HPF model is what motivated them to contribute a new method. They should show the reader what they saw, to back this motivation.\"}",
"{\"title\": \"R1 - inference\", \"comment\": \"- **My biggest issue with the paper concerns inference. The description is limited, only referring to a specific algorithm (ADVI) without specifying all details (mean-field vs full-rank approximation), and there is no discussion or analysis on how well it works.**\\n\\nAs a Bayesian hierarchical model, there are many methods to perform inference. Because we do not claim to be using the best method, and were short on space, we did not devote much space to our exact method. Additionally, we did not want to detract from the main message of sparse encoding.\\n\\nIn our revision in preparation, we are providing more transparency as to the details of the scheme. In summary, we are relying on published literature for providing backing behind our inference scheme. In particular, we are adapting the mean-field variational black box scheme used in literature on horseshoe Bayesian autoencoders (Ghosh 2019, and newly cited Louizos 2020). In our manuscript, we provided Eq. 8 (will be Eq 9 in revision) as a specific reparameterization for improving computational stability, as done in the literature.\\n\\n- **The authors do say that the algorithm converged fast, but do not show this in any experiment. More importantly, the convergence does not yet guarantee the approximation is good and there are well-known cases for which ADVI does not really work that well. Expanding both the discussion and empirical demonstration of this would be critical. Now you say \\\"one may use...\\\" and \\\"one can access...\\\" with references to specific techniques for evaluating the quality, which gives the impression you have not actually done that.**\\n\\nThe cited manuscripts Ghosh 2019 and Louizos 2019 discuss inference and we would like to maintain a focus on interpretability. However, in the Supplemental Materials, we will now provide notebooks showing inference performed for the synthetic data examples. Actually, our implementation is already public on github, along with links to Colab notebooks that reproduce Fig. 2. We will not link to our repository yet to preserve anonymity. Post-deanonymization, we will directly link to empirical results. For now, please refer to our revised Supplement that we will post sometime this weekend.\\n\\n- **As I presume you implemented the model in Stan (no point in using ADVI if not; there are better stochastic VI methods around that are also easier to implement), would you be able to compare the inference results against HMC at least in some small-scale problem?**\\n\\nIn actuality we implemented the method in Tensorflow-Probability (TFP), due to the extra flexibility that it affords. As we allude in the Introduction, we use the enclosed method as a piece of a larger modeling infrastructure that is applied to modeling problems that require flexibility and interpretability. TFP does provide methods for HMC/Nuts, though the implementation is different than Stan. There is some difficulty to us implementing HMC in our code-base however. In essence we designed the entire model source base to use batched data, for stochastic minibatch-based inference on the variational objective. We are attempting to make relevant changes so that we can use HMC, however, this requires extensive changes and we cannot promise that we will have this done by Monday/Tuesday.\\n\\n\\n- **Very limited coverage of inference, which is an important aspect even if carried out by an external software. 
Both theoretical and empirical evidence is missing, even though methods for evaluating the approximation quality are referred to.**\\n\\nPlease see above.\\n\\n\\n- **Figures 3 and 4 are pretty, but the spherical representation does not seem to add anything here and only results in the plots taking too much space while being slightly more difficult to read. More generally, the comorbidity example is a bit superficial and could have been developed a bit further.**\\n\\nThank you so much for the compliment; we also like the figures. We have been aware that they consume a lot of whitespace. With a great sense of remorse, we have redone these figures so that they take up less space now (though they are now aesthetically vanilla). We think this sacrifice is worth it because it gives us extra space to expand on inference and improve the overall exposition. We are also expanding on the comorbidity example.\\n\\n\\n- **How was ADVI applied? How did you check the approximation is good?**\\n\\nPlease see above.\\n\\n- **Did you apply WAIC/PSIS-LOO or just say that it could be done?**\\n\\nWe have an implementation of WAIC that we use in the Supplemental Materials for comparing the choice of different link functions f/g. We are conceptualizing a follow-up paper that will be more focused on learning those functions consistently and on how they impact the generating process.\"}",
"{\"title\": \"R1 - Intuition and motivation\", \"comment\": \"- **The model itself is a bit counter-intuitive, explaining a generally desirable property (latent variables following a reasonably chosen prior distribution, free from additional computational constraints) as a limitation and proceeds to replace it with a simplified mapping from inputs, but as the mapping itself is well justified in terms of sparsity the overall construct still makes sense.**\\n\\nWe see constraints in statistical problems as generally desirable as they remove ambiguity and add regularization. However, the intent of our method is to only have minimal impact on the generating process implied by the decoder portion of the model (which corresponds to the standard matrix factorization method). Other than the usage of a updated sparsity model, our generating process is the same that is used in standard HPF.\\n\\nIn all pure matrix factorization methods, some mapping $Y \\\\to \\\\pi(\\\\theta|Y, ...)$ from data to representation exists, however, is not explicitly specified or necessarily well-posed. In other words, for a given decoding, an encoding may or may not exist that is sparse in an useful way -- regardless of decoder sparsity. The constraint removes this ambiguity completely. Furthermore, in the original HPF, a prior is still placed on the representation which is itself encouraged to be sparse. Hence, in terms of a priori restriction on the representation space, we believe that our method is at least equivalent to that of HPF.\\n\\n- **The presentation angle is somewhat narrow and some connections are missed (e.g. the authors do not explain this as amortizing the inference for the latent variables, but explain the model in terms of encoders/decoders), but this is not a major issue.**\\n\\nThank you for the suggestion on how we should highlight the fact that our method amortizes inference for the latent variables. We are making this point more prominent (previously it was mentioned in passing under the caption of Fig 1). We have added the following text to our revision in preparation:\\n\\nThe distributions of the parameters of the encoder are learned self-consistently with other model parameters.\\nIn the process, one is training not only the generative model, but also the subsequent Bayesian inference of mapping data to representation by learning the statistics of the posterior distribution,\\n\\n\\\\begin{equation}\\n\\\\theta_u \\\\vert \\\\mathbf{y}_u \\\\sim \\\\iint \\\\pi(\\\\theta_u \\\\vert \\\\mathbf{B},\\\\varphi, \\\\mathbf{y}_u)d\\\\mathbf{B}d\\\\varphi,\\n\\\\end{equation}\\n\\nwhere the generative process has been marginalized.\\nIn short, the model of Eq. 2 uses the marginal posterior distribution of the encoding matrix $\\\\mathbf{A}$ to reparameterize this Bayesian inference. \\nDoing so makes it easier to apply the model to new data in order to compute representations.\\nIt also allows us to impose desirable constraints on the representations themselves.\\n\\n\\nFrom our perspective, it is the mathematical connection between matrix factorization methods and autoencoders that is key to thinking of our overall method. The generative (decoder) process of a linear autoencoder is exactly a matrix factorization. Hence, by adding an encoding machinery to factorization (representation inference), a matrix factorization model becomes an autoencoder. 
\\n\\nThe first advantage of thinking in terms of encoder/decoder structures is that principled extensions to the method, while retaining the full intrinsic interpretability, become evident. However, these extensions are not the main message behind our manuscript, and we see how they may be confusing the message, so we have reduced the prominence of the GAM aspect in the revision. In a forthcoming work we will expand on theoretical aspects of extending the method to non-linearity. As the reviewer noted, \"The paper proposes a PMF variant that replaces the free-form representation for latent factors from explicit mapping from inputs, to improve interpretability.\" We are devoting more attention to this main message in our revision.\"}",
"{\"title\": \"To R1/R2/R3: Thanks for the careful reviews and apologizes for these late posts\", \"comment\": \"First, thank you for the careful critique of the work. Also, apologies for posting a reply so late in the review period. We are still getting used to this type of review process and did not realize that it is meant to be a more-interactive experience than that is typical of journals. While we are still preparing edits to our manuscript, we thought it would be good to respond to your reviews.\"}",
"{\"title\": \"Response to R2 - Practical value, bigger picture\", \"comment\": \"- **The manuscript is well written: precise and well articulated. It develops well the theoretical point that sparsity in encoding and decoding is important. However, the practical value of the contribution is not strongly demonstrated. In the real-life application, on comorbidity data, the sparsity is demonstrated as expected, but the benefit compared to other approaches is difficult to gauge, whether it is to a non-sparse approach, or an approach based on sparsifying priors. The benefit of the GAM decoding is not clear.**\\n\\nIn our revision, we are expanding on the comorbidity application to better explain why sparsity in the encoder says more than sparsity in the decoder, on a real application. This is important to us because our focus is on highlighting the benefit in terms of interpretability.\\n\\nWe incorporated the GAM to provide a principled way of extending the generative model to nonlinearity. In our Supplement we do some model comparison of such nonlinear models on synthetic data. We wish to expand more on the GAM aspect in a subsequent manuscript, where we will explore how the interplay between f and g influences the generative process.\\n\\n- **While there are good theoretical arguments for the model, they only partly convince in practical terms: a full analytics pipeline has made aspects to it, and the arguments might not be as important as they seem in practice. It could help to perform more empirical comparison, and to study more the contribution in the context of full analyses. The empirical demonstrations show that the model exhibit the properties that it was designed for: sparsity in the decoding. What are practical consequences of these properties in real-life applications?**\\n\\nWe hope that by improving the exposition of the comorbidity application that the consequences of sparse encoding will be more-evident from a first-reading. In short, without sparse encoding, one cannot say that a particular representation coordinate is formed from any given subset of features. When judging interpretability, particularly in high-dimensional problems, sparsity is a desirable property because it allows one to focus on a few of many features at a time as a coherent concept. In our application, we can say that a given coordinate is determined by a specific subset of medical billing codes that co-occur often in the dataset. We know explicitly what the relative importance of each of the billing codes in the subset are on each representation coordinate, by reading entries off the encoding matrix. So for example, in our manuscript, we highlight that there is a particular representation coordinate that pertains broadly to respiratory disorders.\\n\\n- **In the bigger picture, it is unclear to see how this contribution positions itself in terms of practical benefits in the vast literature on latent factors with distangling approaches (distangling autoencoders, various matrix factorizations including NMF with different losses).**\\n\\nWe appreciate this criticism and we are editing our manuscript to make it clear that we are ameliorating a gap between how these pre-existing methods are often interpreted, and what the models actually say. In short, through a modification (an imposition of sparse encoding), one can actually interpret these methods in the natural way that is often done. One can say that a a given set of features coherently loads together into a representation coordinate.\"}",
"{\"title\": \"Response to R2 - On training\", \"comment\": \"We would like to thank you for the interesting perspectives on our paper that we had not fully appreciated. Our focus is on interpretability - in particular addressing a gap between how similar factorization methods are interpreted versus what the models actually say. This gap appears superficially subtle but we believe it has large consequence. For this reason, we would like to refrain from straying too much from this central message. That said, we will address your comments in our revision that we will post this weekend. In the meantime, please see below for our responses.\\n\\n- **The main practical benefit compared to sparsifying priors (as in Gopalan et al. (2014)) is that the contributed method can be solved with automatic differentiation variational inference and hence stochastic solvers, which makes it in theory easier to scale, though this improvement is not demonstrated empirically in the manuscript.**\\n\\nInterestingly, it was issues with the scaling of HPF that lead us to first create our method. For HPF, the posterior distribution of an UxK representation matrix is learned at training. When we initially implemented HPF in custom Tensorflow code, we found that we could not scale it to suit our problem (to fit into GPU memory) where U was on the order of tens of millions and K is sometimes on the order of tens to hundreds.\\n\\nWe also did not see a clean way of performing batch inference on this problem. In standard HPF, the representation matrix is itself a model parameter. In minibatch optimization, one avoids storing the entire dataset in GPU memory. However, one still needs to store the parameters for the model (including the representation for HPF) because they are what is updated in each iteration of inference. In our revision we are expanding a bit on this theoretical discussion. We should note that the package hpfrec, that we use in our comparisons, has implemented a nonstandard minibatch algorithm for standard HPF. \\n\\nRegardless, it soon became clear to us that interpretability of HPF was most lacking. We would like the focus of our manuscript to be on interpretability. Even if HPF is faster, through various tunings and hacks for implementing minibatch training, it doesn't provide what our method provides in terms of interpretability. That said, we will expand more on why having a constrained encoder transform leads to more memory-efficient training.\\n\\nIt is not however an apples to apples comparison when contrasting the computational cost of our method against that of standard HPF. This is mainly because we are using an updated sparsity model whereas HPF uses gamma distributions. For this reason, HPF has the advantage of possessing exact variational updates whereas we use stochastic gradient descent. Additionally, hpfrec, which we use for comparisons, is implemented in CPU code and our implementation is GPU. For these reasons, we hope that the reviewer will be sympathetic to our perspective that a direct benchmark comparison of the two algorithms is of limited use.\"}",
"{\"title\": \"Response to R3\", \"comment\": \"- **For me the weakness of the approach is that the solution proposed appears very ad hoc with no probabilistic basis. Particularly in matrix factorisation problems where there is often no easy way to establish ground truth: this often leads to setting hyperparameters arbitrarily. To be fair the authors point to ways of assessing predictive power which might help in this domain. A less ad hoc model might allow a more principled approach to choosing hyperparameters.**\\n\\nThank you for your critique of the method. We think we can help address your concern by relating our method to other matrix factorization approaches like PCA and SVD. It is true that the unsupervised problem lacks a ground truth. In our view, the objective of methods like PCA and SVD is to find a useful parameterization of the data in fewer dimensions, that retains most of the data's variability. PCA and SVD do so under a Gaussian noise model. HPF and our variant of HPF do so under a Poisson model.\\n\\nThe question then is whether such a parameterization is useful - the answer to this question is problem-dependent and the general solution outside of the scope of our manuscript. However, in our revision we are expanding our exposition of the comorbidity application to better-describe how the output of our method can be used downstream in analysis. In our case, the method is useful because it first reduces the dimensionality of the data and second makes coarse-graining of the data, by grouping like datapoints together, more tractable. All this is done while maintaining interpretability of the overall model in terms of the original data features, at all times.\\n\\nAs to hyperparameters, the parameters within the model can influence the result. However, we have carefully tuned the parameters using standard Bayesian considerations. For instance, the scaling on the background process is set to a somewhat large multiple of the average value for each feature in the dataset. This type of prior is known as a weakly-informative prior in the Bayesian literature. We set the scaling of the regularization so that the variance of the priors is invariant with data size. This setting yielded the results demonstrated in Fig 2 -- in particular we are able to replicate the same results on matrices of different sizes, using the scalings provided in our manuscript. In that respect, there are few hyperparameters that need to be tuned.\"}",
"{\"title\": \"Interesting HPF variant with slightly superficial presentation\", \"review\": \"Summary:\\nThe paper proposes a PMF variant that replaces the free-form representation for latent factors from explicit mapping from inputs, to improve interpretability. A reasonable probabilistic formulation with carefully selected priors is provided, but inference is carried out using generic tools, and the method is illustrated on artificial data and a simple comorbidity application.\", \"reasons_for_score\": \"The proposed model is interesting and well motivated, and the technical details regarding the choice of priors for encouraging sparsity are good and match current recommendations. The model itself is a bit counter-intuitive, explaining a generally desirable property (latent variables following a reasonably chosen prior distribution, free from additional computational constraints) as a limitation and proceeds to replace it with a simplified mapping from inputs, but as the mapping itself is well justified in terms of sparsity the overall construct still makes sense. The presentation angle is somewhat narrow and some connections are missed (e.g. the authors do not explain this as amortizing the inference for the latent variables, but explain the model in terms of encoders/decoders), but this is not a major issue.\\n\\nMy biggest issue with the paper concerns inference. The description is limited, only referring to a specific algorithm (ADVI) without specifying all details (mean-field vs full-rank approximation), and there is no discussion or analysis on how well it works. The authors do say that the algorithm converged fast, but do not show this in any experiment. More importantly, the convergence does not yet guarantee the approximation is good and there are well-known cases for which ADVI does not really work that well. Expanding both the discussion and empirical demonstration of this would be critical. Now you say \\\"one may use...\\\" and \\\"one can access...\\\" with references to specific techniques for evaluating the quality, which gives the impression you have not actually done that. As I presume you implemented the model in Stan (no point in using ADVI if not; there are better stochastic VI methods around that are also easier to implement), would you be able to compare the inference results against HMC at least in some small-scale problem?\", \"pros\": \"1. The idea is insightful and matches well the application needs.\\n2. The priors match current literature on suggestions for sparsity-inducing priors.\", \"cons\": \"1. Very limited coverage of inference, which is an important aspect even if carried out by an external software. Both theoretical and empirical evidence is missing, even though methods for evaluating the approximation quality are referred to.\\n2. Figures 3 and 4 are pretty, but the spherical representation does not seem to add anything here and only results in the plots taking too much space while being slightly more difficult to read. More generally, the comorbidity example is a bit superficial and could have been developed a bit further.\", \"questions\": \"1. How was ADVI applied?\\n2. How did you check the approximation is good? 
Did you apply WAIC/PSIS-LOO or just say that it could be done?\", \"modifications_after_discussion\": \"Increased score by one since the revised paper clarifies the missing details on inference and also improves the motivation.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
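To make the suggested check concrete, here is a minimal sketch of an ADVI-vs-HMC comparison, assuming the model is implemented in Stan and fit via cmdstanpy (the file name `model.stan`, the data dict, and the parameter name `theta` are all placeholders, not taken from the paper):

```python
from cmdstanpy import CmdStanModel

model = CmdStanModel(stan_file="model.stan")   # placeholder Stan program
data = {"N": 100, "D": 5}                      # placeholder data dict

# Mean-field ADVI fit and a reference HMC fit of the same model.
advi_fit = model.variational(data=data, algorithm="meanfield", seed=1)
hmc_fit = model.sample(data=data, chains=4, seed=1)

# Compare posterior summaries for a shared parameter; large discrepancies
# indicate that the variational approximation is poor.
print(advi_fit.stan_variable("theta"))            # ADVI point estimate
print(hmc_fit.stan_variable("theta").mean(axis=0))
```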
"{\"title\": \"Good theory, not completely convincing in practice\", \"review\": \"This manuscript revisits the hierarchical Poisson matrix factorization (HPF) promoting sparsity in both the representation and the decoding function with a Horseshoe+ prior. Sparsity on both sides is put forward for interpretation purposes, and to create a column-clustering property. In addition, the proposed approach caters for non-linear decoding using a GAM model.\\n\\nThe main practical benefit compared to sparsifying priors (as in Gopalan et al. (2014)) is that the contributed method can be solved with automatic differentiation variational inference and hence stochastic solvers, which makes it in theory easier to scale, though this improvement is not demonstrated empirically in the manuscript.\", \"the_manuscript_is_well_written\": \"precise and well articulated. It develops well the theoretical point that sparsity in encoding and decoding is important. However, the practical value of the contribution is not strongly demonstrated. In the real-life application, on comorbidity data, the sparsity is demonstrated as expected, but the benefit compared to other approaches is difficult to gauge, whether it is to a non-sparse approach, or an approach based on sparsifying priors. The benefit of the GAM decoding is not clear.\\n\\nWhile there are good theoretical arguments for the model, they only partly convince in practical terms: a full analytics pipeline has made aspects to it, and the arguments might not be as important as they seem in practice. It could help to perform more empirical comparison, and to study more the contribution in the context of full analyses. The empirical demonstrations show that the model exhibit the properties that it was designed for: sparsity in the decoding. What are practical consequences of these properties in real-life applications?\\n\\nIn the bigger picture, it is unclear to see how this contribution positions itself in terms of practical benefits in the vast literature on latent factors with distangling approaches (distangling autoencoders, various matrix factorizations including NMF with different losses).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Sparse Encodings for counting matrix factorisation.\", \"review\": \"** Description\\n\\nThis paper provides a new approach for finding a sparse encoding of count data matrices and hence automatically achieve feature selection.\\n\\n** Pros\\n\\nThe proposed technique is clearly efficient and practical. It identifies a failing in traditional hierarchical Poisson matrix\\nfactorisation (HPF) and proposes a solution. This approach is tested on real world datasets where its usefulness is demonstrated.\\n\\n** Cons\\n\\nFor me the weakness of the approach is that the solution proposed appears very ad hoc with no probabilistic basis. Particularly in\", \"matrix_factorisation_problems_where_there_is_often_no_easy_way_to_establish_ground_truth\": \"this often leads to setting hyperparameters arbitrarily. To be fair the authors point to ways of assessing predictive power which might help in this domain. A less ad hoc model might allow a more principled approach to choosing hyperparameters.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
QKbS9KXkE_y | Data-efficient Hindsight Off-policy Option Learning | [
"Markus Wulfmeier",
"Dushyant Rao",
"Roland Hafner",
"Thomas Lampe",
"Abbas Abdolmaleki",
"Tim Hertweck",
"Michael Neunert",
"Dhruva Tirumala",
"Noah Yamamoto Siegel",
"Nicolas Heess",
"Martin Riedmiller"
] | Hierarchical approaches for reinforcement learning aim to improve data efficiency and accelerate learning by incorporating different abstractions. We introduce Hindsight Off-policy Options (HO2), an efficient off-policy option learning algorithm, and isolate the impact of action and temporal abstraction in the option framework by comparing flat policies, mixture policies without temporal abstraction, and finally option policies; all with comparable policy optimization. When aiming for data efficiency, we demonstrate the importance of off-policy optimization, as even flat policies trained off-policy can outperform on-policy option methods. In addition, off-policy training and backpropagation through a dynamic programming inference procedure -- through time and through the policy components for every time-step -- enable us to train all components' parameters independently of the data-generating behavior policy. We continue to illustrate challenges in off-policy option learning and the related importance of trust-region constraints. Experimentally, we demonstrate that HO2 outperforms existing option learning methods and that both action and temporal abstraction provide strong benefits in particular in more demanding simulated robot manipulation tasks from raw pixel inputs. Finally, we develop an intuitive extension to encourage temporal abstraction and investigate differences in its impact between learning from scratch and using pre-trained options. | [
"Hierarchical Reinforcement Learning",
"Off-Policy",
"Abstractions",
"Data-Efficiency"
] | Reject | https://openreview.net/pdf?id=QKbS9KXkE_y | https://openreview.net/forum?id=QKbS9KXkE_y | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"9sq-97Yftzk",
"DfqNN8e_vEj",
"VZOL1dQI2Ky",
"VHeUjNEOlPN",
"ZVq7yzLIIX",
"yKYKyVgmn9m",
"VziVlhBSNSW",
"_smAERO5jtK",
"YhiSk-jYJFy",
"rFeLvUG0zvU",
"SgzYra0F3j",
"mO056NnKmAa",
"1RvEnlJgP1E",
"_4127M8OzA5",
"nHFvU4AxKGO",
"yss9T9HBI0Z",
"P1wv9_OyfV3",
"wjYjqwWtlh7",
"0uiGvO2uKSW",
"1HdufbjlTEQ",
"bnosUaXa3yB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040395361,
1606263203585,
1606261150540,
1606247408465,
1606247386989,
1606244294815,
1606143060612,
1605911724887,
1605911024929,
1605625041862,
1605625022756,
1605624893587,
1605624750527,
1605624704827,
1605624528009,
1605624484686,
1605624311243,
1604899412706,
1603936575180,
1603600483169,
1603185965131
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3646/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"There was a fair amount of discussion about the paper. Several reviewers felt that the paper would have been stronger if it tried to do less but better. The reviews describe in detail what the reviewers would have found compelling, but the key suggestion is to remove the complexity that is not essential for the approach to provide consistent improvements. Doing this requires a better understanding of the algorithm's behavior and a valid ablation study, a new concern raised during the discussion with the authors.\\n\\nThe reviewers felt that the proposed approach is potentially interesting and would like to see this paper done well.\"}",
"{\"title\": \"Feedback Part 2\", \"comment\": \"> Empirical evidence\\n\\nLet's have some nuance. Is your claim: \\\"HO2 outperform baselines over multiple seeds and many experiments\\\" true? First, it's impossible to know if it outperforms over multiple seeds since we are only shown an aggregate over seeds (maybe it only outperforms once, and fails four times, but on average still outperforms), so this claim certainly cannot be supported with evidence found in the paper. Second, let's tally the results:\\n* Figure 3: HO2 outperforms competitors convincingly in 1 of 4 plots by a margin of 20%, ties in other 3.\\n* Figure 5: HO2 outperforms baselines in 3 of 4 plots by a margin of [10%, 20%, and 15%] roughly judging by final performance . This is your strongest evidence by far.\\n\\nIn total, we have HO2 winning in 4 of 8 plots with a rough average of 15% improvement where it wins. I can equally conclude that HO2 \\\"outperforms baselines [...] over many experiments\\\" as I can conclude that HO2 does **not** outperform baselines over many domains. Further, on average across all domains including where HO2 ties, we expect an 8% improvement over competitors assuming that all 8 domains are totally independent (share no common features). Of course, this is a bad assumption since in Figure 5 all of the environments are quite similar to each other (different tasks on the otherwise same MDP), implying that 8% is an *overestimate*.\\n\\nBy performing this aggregating tally, I was willing to concede that there was some evidence that HO2 improves over baselines (I even ignored the fact that this is an overestimate). But that was using 8 plots for a total of 40 random seeds. Let's consider each ablation individually now to understand my point about not being able to make claims about the ablations.\\n\\nFigure 6 asks whether the hard-limit plays a role on performance, and how it impacts the option switching rate. The left figure says: no impact. The right figure says: big impact. In total, I couldn't tell you if there is no impact or a big impact. This ablation tells me very little. I don't find any other related figures in the appendix for this ablation. **Action item:** include more domains or include more random seeds to see if the left figure actually has an impact that is hidden behind the variance or try this test on a smaller set of domains where more environments/seeds could be tested.\\n\\nFigure 7 asks if conditioning on past actions plays a role on performance. The left figure says: don't condition, the other three say: no impact. In total, I don't know if there is an impact or not. Checking the appendix, and this strongly suggests there is no impact so this wasn't an interesting question. Not only this, but Figure 14 in the appendix suggests that the only appreciable impact was due to plotting choices, not due to actual underlying processes. Further Figure 7 is entirely dishonest in its implications and cannot be called an ablation. It does not control for all aspects between the two presented algorithms, other than the singularly manipulated variable. Instead it additionally: (1) cuts the size of the replay buffer of the poor performing line by 2 orders of magnitude, and (2) it changes the number of learning steps and data collection steps by 1 order of magnitude. 
**Action item:** conclude that there is no impact and relegate this to the appendix, and make the comparison between algorithms more fair (I recognize the attempt to make this fair in Figure 14, but should note that this made the two algorithms perform almost identically).\\n\\nFigure 8 asks about the impact of the trust-region constraints. In its singular domain presentation, it appears there are massive impacts on performance. But checking Figure 15 in the appendix, it is clear that this is not consistently true and is only true in the cherry-picked domain presented in the main paper and its most related sibling. In all other cases, the results are washed away by variance or are nearly identical. **Action item:** relegate this to the appendix, or provide a more fair view across domains in the main paper. Don't cherry-pick.\\n\\nI will point out some interesting details of the Henderson et al. paper. Tables 1 and 2 use 5 random seeds based on code from other works to demonstrate the inconsistency of results. They go on to further demonstrate this in Figure 5, where Henderson et al. use 5 random seeds to demonstrate the insufficiency of 5 random seeds. The remainder of the plots from Henderson et al. use bootstrap confidence intervals with 10k bootstrap iterations; a far cry from using only 5 random seeds.\\n\\nEdit: I realized that the change in formatting from writing the response to posting it resulted in the bolded words being more aggressive than intended. I reworded and reformatted the paragraph about Henderson et al.'s work to make the language less blunt but still direct.\"}",
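For readers unfamiliar with the procedure referenced above, here is a minimal sketch of a percentile-bootstrap confidence interval over per-seed returns (the function name and example values are illustrative, not taken from either paper):

```python
import numpy as np

def bootstrap_ci(returns, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean return across random seeds."""
    rng = np.random.default_rng(seed)
    returns = np.asarray(returns, dtype=float)
    # Resample the seeds with replacement and record the mean each time.
    means = np.array([
        rng.choice(returns, size=len(returns), replace=True).mean()
        for _ in range(n_boot)
    ])
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])

# e.g. final returns from 5 seeds on a single task
print(bootstrap_ci([61.0, 74.5, 58.2, 80.1, 69.3]))
```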
"{\"title\": \"Feedback\", \"comment\": \"The author response raises concerns of ambiguity in my feedback and seeks direct, actionable feedback. I will try to be even more direct here to reduce any potential future miscommunication. Unfortunately, directness tends to read as bluntness or even rudeness, so please note that the negativity (a) is biased due to the conversation being focused around what I perceive as being detriments of the paper and (b) is likely overstated in the attempts to be more clear. I will make sure my final recommendation of the paper reflects this.\\n\\n> Extended MDP\\n\\nSection 2 defines an MDP with statespace $\\\\mathcal{S}$, but with a policy $\\\\pi : \\\\mathcal{H} \\\\times \\\\mathcal{A} \\\\to [0, 1]$. These two statements are incongruent. I appreciate further context for why you care about histories, but this was already clear to me. I am less concerned with why and more concerned with if it is correct. **Action item:** clarify in Section 2 what setting you are in. Perhaps this can be as simple as citing the Sutton et al. 2009 paper in Section 2 and using their formalism.\\n\\nIn Equation 3, your parameterized policy $\\\\pi_\\\\theta (a_t, o_t | h_t)$ is a function of history. Equation 6 suggests that $Q_\\\\phi$ depends on $\\\\pi_\\\\theta$ and $q(a_t, o_t | h_t)$. Algorithm 1 (HO2) says component probabilities are determined via $\\\\pi^H (o_t | h_t)$ and actions are sampled from $\\\\pi_\\\\theta(\\\\cdot | h_t)$. I'll admit, the abundance of symbols and the time between my careful reading of the paper and now makes it difficult for me to interpret what is happening here. However, every single policy that the optimization for $Q_\\\\phi$ seems to depend on, itself depends on history $h_t$. The author response says this is untrue, that $Q_\\\\phi$ does not depend on $h_t$. **Action item:** clarify throughout Section 3 the dependence between $Q_\\\\phi$ and $h_t$. The current stated sentence is insufficient because it is not self-evident. If the graphical model says that $\\\\pi$ is dependent only on $o_t$, then change the notation throughout Section 3 to state that instead of the more general $h_t$.\\n\\n> Switch constrained extension\\n\\nThe clarification about the reward independence is not necessary, that part is clear to me. Admittedly, I am unsure how to make that more clear than my previous response, but I will try. My complaint is the following. The paper and author responses make a claim: a hard limit is easier to use than an additional term on the loss because the additional term on the loss depends on knowing the scale of rewards. However, there is no support for this claim in either the author response or the paper. As such, I must fall back on my own intuitions---which are fallible---and that makes me uncomfortable. My intuition says that I am far more likely to know the reward scale because I design the rewards myself for most real-world problems. I am highly unlikely to know how many times my algorithm should switch options though. I can use a priori information to set a reward scale, but it seems difficult to use prior information to set this new parameter. **Action item:** provide support for the claim that the new extension is easier to use than previous work. This could be done (for instance) by showing that the algorithm is insensitive to the new parameter. 
This could also be done by describing a real scenario where it is \\\"easy\\\" to know what the hard limit parameter should be, but where the practitioner won't likely know the reward scale.\\n\\n> Simpler setting\\n\\nI regret that I was unclear in my last response. I should have stated that this is in combination with the random seeds/empirical study complaint; these two comments should not be treated separately. As a whole, the complaint is that there is insufficient empirical evidence to support most claims in the paper. A possible solution (read: an **actionable item**) to avoid computational cost would be to use simpler, better thought-out settings.\"}",
"{\"title\": \"References for feedback\", \"comment\": \"[1] Levy, K. Y. and Shimkin, N. (2011). Unified inter and intra options learning using policy gradient methods. In Proceedings of the 2011 European Workshop on Reinforcement Learning.\\n\\n[2] Puterman, M. L. (2014). Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.\\n\\n[3] Sutton, R. S., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2), 181-211.\\n\\n[4] Zhang, S., & Whiteson, S. (2019). DAC: The double actor-critic architecture for learning options. In Advances in Neural Information Processing Systems (pp. 2012-2022).\\n\\n[5] Smith, M., Hoof, H., & Pineau, J. (2018, July). An inference-based policy gradient method for learning options. In International Conference on Machine Learning (pp. 4703-4712).\\n\\n[6] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.\"}",
"{\"title\": \"Feedback\", \"comment\": \"Thank you very much for the additional feedback. We are pleased that the last rebuttal clarified some questions but see that there are still remaining misunderstandings and will try to clarify these in the following paragraphs.\\n\\n- Extended MDP\", \"to_start_with_the_most_important_point\": \"the option policy does only rely on the history h via the option o, and does not require explicit dependence on the entire history. This is described in the equations and graphical model but we will again strengthen this point in the paper.\\n\\nTherefore, the setting in this paper is identical to most previous work on option learning [1, 2, 3, 4, 5]: the semi-MDP. At each timestep, the option policy is only dependent on the current state and the previous option, as described in Equation 2, and the joint action-option probability can be decomposed following Equation 3. We have further clarified that the history-based notation was chosen to describe the connection to mixture policies.\\nSince the policy explicitly only depends on the option (and not the full history), the same also holds for the state-action value function Q which is now a function of state, action and option Q(s,a,o). We hope this fully addresses the second concern.\\n\\nIn addition to previous changes, we have now clarified this aspect in the method section and also more clearly referred to the semi-MDP framework.\\n\\n- Details for the switch constrained extension\\n\\nWe would like to start the answer for this point by emphasising that the core method does not require switch constraints and this is purely an extension (made possible by the structure of the inference graph). \\n\\nIn addition, we would like to clarify the argument on reward scale independence of this hyperparameter, which provides one of the big improvements over commonly used additional weighted costs. Essentially, this hyperparameter can be set independent of an environment\\u2019s reward scale since the method does not directly affect the objective which is being optimised but rather the paths through the inference graph which are used for achieving the objective. \\n\\n- Simpler setting\\n\\nWe agree on the importance of toy domains and use a domain which is common across related literature with the OpenAI gym experiments. However, we expect that the reviewer aims at even simpler domains which could help to further investigate different algorithm aspects.\\n\\nIn general, the recent option learning literature purely focuses on domains of the complexity of our simple OpenAI gym domains - in many cases, papers only use these exact domains (e.g. [4, 5]) . While we do not extend the common set of experiments to even simpler domains, we move in the opposite direction and improve the analysis in our experimental section by extending to more realistic domains and complex raw pixel inputs. \\nIn this way, our experiments are already of increased detail in comparison to recent publications. We have chosen to focus on scalability and robustness for this work because these domains are closer to a practical, real-world application. \\n\\n- Variance and number of seeds\\n\\nWe appreciate the reviewer recognising that HO2 outperforms baselines over multiple seeds and over many experiments. 
However, we would like to address the reviewer's comments that they \u201cdo not think the degree to which it outperforms is substantial\u201d and \u201cdo not believe any of the ablation studies provide any additional evidence that HO2 outperforms competitors when randomizing over particular factors.\u201d\\nOur main concern is that these comments remain ambiguous and do not point to specific issues which we as authors can address, but rather express a general opinion of the reviewer. Without pointers to specific missing experiments or missing ablations, this is impossible to address and, in our opinion, fails to carefully consider the various ablations already performed in the paper.\\n\\nIn addition, the use of 5 seeds for experiments is common across RL publications and was also applied for most experiments in the paper previously cited by the reviewer ([6] and other references). \\n\\nWe would welcome actionable feedback regarding these points, and indeed have already taken previous comments to further improve the paper. However, as discussed in our previous response, we have already designed the experiments in the paper to rigorously measure the contribution of HO2 and ablate the impact of different components.\"}",
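For readers following this exchange, a generic sketch of the semi-MDP decomposition described above, in standard option-framework notation (the superscripts H/L follow the surrounding discussion; the paper's Equations 2-3 may differ in details):

```latex
\pi_\theta(a_t, o_t \mid s_t, o_{t-1})
  \;=\; \underbrace{\pi^{H}_\theta(o_t \mid s_t, o_{t-1})}_{\text{high-level option choice}}
        \;\underbrace{\pi^{L}_\theta(a_t \mid s_t, o_t)}_{\text{low-level option policy}}
```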
"{\"title\": \"Feedback\", \"comment\": \"We are pleased that the responses were helpful and are certain that addressing the reviewer\\u2019s comments has improved the paper considerably.\\n\\nWe have already improved the description of the baselines in the previous update but recognise the need to further clarify. The lines describe the performance after 2*10^6 steps as described in the caption and are taken from [1]. We have confirmed with the authors of the paper that the settings match exactly between the experiments. We chose to follow this direction to ensure that we take the best known results in this benchmark (a common practice in many fields e.g. computer vision, NLP and partially used in RL as well), instead of using a sub-optimal reimplementation of the algorithms which would potentially underperform existing results.\\n\\nUsing straight lines to indicate final results after the maximum training time of 2x10^6 steps instead of complete learning curves has two reasons. First, to prevent additional clutter in the graphs. Second, the learning curve comparison between on-policy and off-policy learning is only meaningful within limits. While we can align the number of actor steps, we cannot do so for learner steps as the ratio can be independently chosen in off-policy learning. \\n\\nWe have further emphasised these aspects in an updated version of the paper to ensure the readers are aware of this setting.\\n\\n\\n[1] Zhang, S., & Whiteson, S. (2019). DAC: The double actor-critic architecture for learning options. In Advances in Neural Information Processing Systems (pp. 2012-2022).\"}",
"{\"title\": \"Better empirical comparison\", \"comment\": \"I thank the authors for their detailed response, and welcome their revision of the paper. The response addresses most of my questions, and clarifies a few aspects of the paper. The revised paper now seems clearer, and I like the discussion on option clustering and Figure 9.\\n\\nMy only (relatively minor) question that remains is why Figure 3 shows horizontal lines for the on-policy baselines (DAC, Option-Critic, IOPG)? These baselines need samples to learn a policy, and I was expecting to see a learning curve (with \\\"steps\\\" being the number of time-steps executed in the environment, in that case). I'm also wondering for how long the baselines have been trained on the tasks, and I did not find that information in the paper (I may have missed it). Are these lines obtained by looking at the results presented in the respective papers of the baselines? I think that stating and motivating why the baselines are well-trained and strong is important to convince the readers that the proposed method outperforms a large variety of related approaches.\"}",
"{\"title\": \"Feedback Part 2\", \"comment\": \"> Simpler setting e.g. linear function approximation setting\\n\\nOn one hand, I totally agree that designing small toy problems that represent the real world is hard and is a waste of time, on the other hand I wonder if that is the wrong goal anyway. Rather I would like to see experimentation done in two steps (A) demonstrate the existence of a problem and show that I have a solution (B) show that this seems to help on real-world problems. A beautiful intermediary step would be to show that the problem in A exists in B, but that can be prohibitively hard and some amount of suspension-of-disbelieve is always appropriate for conference papers. This removes the burden of making A simulate the real-world and instead focuses on distilling exactly what problem the paper is proposing a solution to, then showing me that the paper solves that problem. A nice side-effect being that the clarity of the paper increases by an order of magnitude from this demonstration. Further, because toy problems are often cheap to run, statistical significance is no longer a tradeoff between compute and ability to support claims.\\n\\nBy skipping straight to the real-world domains, it isn't immediately clear that the proposed algorithm actually solves a problem. Right now, a reasonable explanation of the results that I see in Section 4 is that we rolled the proverbial experiment dice a few times and came out slightly ahead. But if first the reader is convinced that the proposed algorithm solves a concrete problem, and the reader is willing to believe that this problem exists in the real world, then now the 5 random seeds and lack of statistical significance tells a stronger story because it builds on a pre-existing structure and expectation.\\n\\n> Variance of results and number of seeds\\n\\nTaking some time to consider this further and I partially concede your point. Because there are so many experiments each with 5 random seeds (and at least most of these are definitely independent) and because the proposed algorithm outperforms or ties its competitor in nearly all of these additional domains, then we can assume that this demonstrates a superiority of the proposed algorithm over its benchmarks. However, there still is not enough evidence to support any single claim with the experiments due to the variance and number of seeds. So the depth of the ablation study is still entirely unconvincing for its purpose because of this.\\n\\nSo in summary, I agree that there is likely enough evidence to claim that HO2 outperforms RHPO and MPO on average, though I do not think the degree to which it outperforms is substantial. I still do not believe any of the ablation studies provide any additional evidence that HO2 outperforms competitors when randomizing over particular factors. I believe this is yet to be adequately supported empirically.\\n\\n## Summary after author response\\nIn light of the above, I so far intend to keep my recommendation as is. My primary reasons are (A) inconsistency across formal problem settings (B) no falsifiable support for the new meta-parameter and not strong intuitive support either (C) insufficient empirical support for claims in paper, with the exception of the implicit claim that HO2 outperforms competitors where I still find the degree of improvement may not outweigh the increased complexity of the proposed algorithm.\"}",
"{\"title\": \"Feedback\", \"comment\": \"I appreciate the detailed response!\\n\\n> Working with an extended MDP.\\n\\nI agree that this history-MDP is not novel and has been studied in the literature, but it is incongruent with the setting that the paper currently states it is solving. I think this could be a bit misleading, especially considering that the solution method in the experiments does not deal with the hidden difficulties inherent with history-MDPs (e.g. there is no recurrence, or historical inputs to the network, etc.). In this way, the paper reads like: \\\"Define problem A, develop theory for problem B, investigate solution on problem A\\\". Reading the paper right now, I might fool myself that this algorithm is searching for a solution in $\\\\mathcal{S} \\\\times \\\\mathcal{A}$-space since it appears to be an MDP problem with that state-space, but in reality the theory in Section 3 only tells me how to find a solution that is in an exponentially larger space $\\\\left( \\\\mathcal{S} \\\\times \\\\mathcal{A} \\\\right)^T$ where $T$ is number of timesteps of the longest possible trajectory. I don't want to sound alarmist and say that this makes the approach hopeless, but I am a little concerned that the paper seems to jump back and forth between an easier-to-solve problem and a scary-big problem very subtly. Concretely, I think the best resolution is to \\\"Define problem B, develop theory for problem B, mention that problem B is hard so investigate if solutions in the sub-problem A are worthwhile\\\".\\n\\nI only just noticed, but an additional concern on this front is this statement from the paper: \\\"Note that even though we express the policy as a function of the history $h_t$, $Q$ is a function of $o_t, s_t, a_t$, since these are sufficient to render the future trajectory independent of the past.\\\" This is untrue. A value function $Q$ is defined in terms of a policy (note that $Q_\\\\pi$ is the estimate of a return _following policy_ $\\\\pi$). This means the value function is additionally a function of $\\\\pi$ and thus is also a function of history. I have some concern that---held under a microscope---there are a few claims and implicit assumptions made in the paper that might break due to the mismatch between the formal problem statement and the algorithm derivation.\\n\\n> Details for the switch constrained extension\\n\\nThe paper and response both make the claim that setting a hard limit on switches is easier than a soft limit. My experience with setting hard limits pulls me to believe otherwise, so I remain skeptical. I had hoped for some support within the paper investigating this claim. Concretely, if I were to deploy this algorithm in a never-ending (or just long-term deployment) environment, I would be far more likely to know information like the scale of my rewards than the maximum number of times my algorithm should switch from one abstract option to another. In fact, I would likely think that my algorithm should infinitely many such switches and would simply want a penalization term to discourage switching more often.\\n\\nOf course, I don't want to impose my opinions on algorithm design here because I am very likely wrong. But I would like to be shown that my gut-reaction is wrong, instead of needing to take the claims on faith. A single sensitivity curve for $n$ would have gone a long way towards supporting this claim I think.\\n\\nI do appreciate the note that $n$ is never chosen to be 0. 
I think these sorts of sanity checks cannot be overstated in an empirical section; they demonstrate that the algorithm is at least doing what it claims to be doing (a standard that, dishearteningly, not all RL papers aspire to).\\n\\n[ran out of space, continuing in a follow-up]\"}",
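For reference, the standard definition underlying the reviewer's point that $Q$ is defined with respect to a policy (textbook notation, not taken from the paper):

```latex
Q_{\pi}(s, a) \;=\; \mathbb{E}\left[\, \sum_{k=0}^{\infty} \gamma^{k}\, r_{t+k} \;\middle|\; s_t = s,\ a_t = a;\ \pi \right]
```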
"{\"title\": \"Continuation\", \"comment\": \"- What do the options do? Do they correspond to goal-directed behaviour or trajectory snippets?\\n\\nThis is already explained in the paper in Section 4.3 (in brief) and in Appendix A (in much more detail). We have extended the analysis in the main paper for clarity.\\n\\nTo summarise, we applied HO2 to smaller and more interpretable domains, with walkers in go-to-target navigation tasks. We ran experiments with different bodies, different forms of information-asymmetry, and with/without the switching constraint between options.\\nIn short, we find that the options can represent both task-directed behaviours or short trajectories of motor behaviours (such as moving in certain directions). Additionally, information-asymmetry between high- and low-level controllers can lead to greater diversity of options as we demonstrate in Section 4.3.\", \"on_the_related_point_of_switch_constraints\": \"as the reviewer correctly identifies, they can be used to encourage temporal consistency. Even without these constraints, the options are already relatively consistent, with fairly low switch rates (Figure 6). Constraining the frequency of switching increases temporal consistency without hampering the agent\\u2019s ability to solve the task and improves performance for transfer with pre-trained options.\\n\\nIf not addressed explicitly in the author feedback, we will address the remaining minor comments directly in the paper. Please do not hesitate to emphasise remaining questions or concerns.\\n\\n\\n[1] Bacon, P. L., Harb, J., & Precup, D. (2017, February). The option-critic architecture. In Thirty-First AAAI Conference on Artificial Intelligence.\"}",
"{\"title\": \"Individual feedback per review\", \"comment\": \"We thank the reviewer for their detailed and constructive feedback. In this post, we will focus on reviewer specific feedback. We will additionally provide an overview post describing the general feedback and corresponding changes.\\n\\n- Motivating the use of options\\n\\nWe thank the reviewer for raising this point. With the additional page available to address such concerns, we have extended the introduction to more clearly motivate options and discuss the merits of such an approach before delving into the existing options literature.\\n\\n- Lack of clarity (figures, contributions, and multi-task learning)\\n\\nFigures 1 and 2 are used to support understanding of the option policies, mixture policies and temporal consistency parts in Section 3 by providing the corresponding graphical models. We believe that this will help understanding for readers who are more familiar with graphical models, sequence modelling, and related fields, while those more familiar with the option literature may benefit more from the derivations in Section 3. We have added additional references in the text when introducing the corresponding equations, improved the captions, and added some brief intuition to the start of each subsection.\\n\\nOur contributions are explicitly stated in bullet form at the end of Section 1. We additionally separate the method section with respect to flat Gaussians and mixture policies which have been trained via critic-weighted maximum likelihood in prior work and the proposed extension to train option policies. The current writing aims to find a compromise between clear separation of methods and clarity and self-containment of the method description. We have furthermore emphasized the use of MPO and RHPO to train Gaussian and mixture policies in prior work.\\n\\nFinally, since options represent reusable behaviour abstractions, they are known to be beneficial in a multi-task setting where they can be shared across tasks (see e.g. [1]). We additionally use multitask learning with related tasks to enable us to address more complex domains with acceptable data requirements. The described methods for multi-task learning have been proposed and tested in prior work and we merely use them here to accelerate learning. We have integrated the corresponding description into the experimental section and adapted it for clarification. \\n\\n- Clarity regarding the use of MPO\\n\\nWe use MPO (using flat Gaussian policies) and RHPO (using mixture policies) as the baseline methods for comparison with HO2, because they use an equivalent underlying optimization procedure based on critic-reweighted likelihood estimation. The fact that the base algorithm is equivalent makes it easy to compare the three approaches and to isolate the effects of action abstraction (which MPO does not have) and temporal abstraction (which both MPO and RHPO miss). We have extended the text to make this clear, and have also clarified that MPO trains a flat policy in the experiments, as suggested by the reviewer.\\n\\n- On learning a single degenerate option that solves everything\\n\\nThe reviewer is right to point out that in certain cases, hierarchical approaches can discover degenerate solutions such as a single option. 
In the case of HO2, the experiments found that with enough structure in the problem setup, a diverse set of options was learned and used - this is also supported and explained by the \\u201cFurther analysis\\u201d experiments in Section 4.3.\\n\\n\\n[Feedback continues in the next post]\"}",
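As referenced above, a minimal sketch of the critic-reweighting idea shared by MPO, RHPO and HO2: exponentiated Q-values over sampled actions are normalized and used as weights for maximum-likelihood policy updates. The full algorithms additionally optimize the temperature via a dual and apply trust-region constraints, which this sketch omits:

```python
import numpy as np

def critic_weights(q_values, temperature=1.0):
    """Normalized exponentiated Q-values over sampled actions (sketch).

    These weights serve as targets for weighted maximum-likelihood updates
    of the policy; MPO-style methods choose `temperature` via a dual
    optimization that this sketch leaves out.
    """
    z = q_values / temperature
    z -= z.max()                 # for numerical stability
    w = np.exp(z)
    return w / w.sum()

q_samples = np.array([1.2, 0.3, 2.1, 1.7])   # Q-values of sampled actions
print(critic_weights(q_samples))
```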
"{\"title\": \"Individual feedback per review\", \"comment\": \"We thank the reviewer for their detailed and constructive feedback. In this post, we will focus on reviewer specific feedback. We will additionally provide an overview post describing the general feedback and corresponding changes.\\n\\n- Online/offline versus on-policy/off-policy learning\\n\\nIt is unclear if we correctly understand the comment regarding the use of online data as off-policy methods in general still use online data and will try to clarify in the following paragraphs. \\nIn general, this paper only focuses on the online reinforcement learning setting, and we do not address the offline RL problem (where the policy must be learned entirely from existing data, without interaction). However, HO2 is an off-policy reinforcement learning algorithm, meaning that it can learn from interactions that do not (only) come from the current policy itself. \\n\\nThe experiments show that (a) common existing approaches to hierarchical RL (which are on-policy) can be outperformed by strong non-hierarchical off-policy methods like MPO, thus showing the criticality of learning off-policy; and (b) our proposed off-policy hierarchical approach outperforms other methods with the same underlying optimization algorithm, such as RHPO and MPO, showing the benefit of a hierarchical off-policy method. We have added more discussion to clarify some of these nuances.\\n\\n- Unclear notation in Algorithm 1 and what are \\\\pi' and Q'\\n\\nWe addressed the following points directly in the paper. \\\\pi' and Q' respectively represent the target policy and target Q-function as previously indicated via the corresponding comment in the algorithm. We have clarified this aspect and provided additional information for other algorithm parameters.\\n\\nIf not addressed explicitly in the author feedback, we will address the remaining minor comments directly in the paper. Please do not hesitate to emphasise remaining questions or concerns.\"}",
"{\"title\": \"Continuation\", \"comment\": \"- Structure that HO2 is able to take advantage of that RHPO is unable to replicate\\n\\nThe core difference between HO2 and RHPO is the ability to represent temporal abstraction with respect to the options (i.e. the ability to explicitly model the continuation of the behaviour of an option instead of resampling at every timestep). This requires the presented changes in the Q-function and the modelling of high-level option probabilities via dynamic programming. As pointed out by the reviewer the aspect seems to be particularly beneficial in the visually complex 3D manipulation tasks. This suggests that the additional structure for exploration is particularly helpful here. We have further clarified this in the paper.\\n\\nIf not addressed explicitly in the author feedback, we will address the remaining minor comments directly in the paper. Please do not hesitate to emphasise remaining questions or concerns.\\n\\n\\n[1] Levy, K. Y. and Shimkin, N. (2011). Unified inter and intra options learning using policy gradient methods. In Proceedings of the 2011 European Workshop on Reinforcement Learning.\\n\\n[2] Puterman, M. L. (2014). Markov decision processes: discrete stochastic dynamic programming. John Wiley & Sons.\\n\\n[3] Sutton, R. S., Precup, D., & Singh, S. (1999). Between MDPs and semi-MDPs: A framework for temporal abstraction in reinforcement learning. Artificial intelligence, 112(1-2), 181-211.\\n\\n[4] Henderson, P., Islam, R., Bachman, P., Pineau, J., Precup, D., & Meger, D. (2017). Deep reinforcement learning that matters. arXiv preprint arXiv:1709.06560.\"}",
"{\"title\": \"Individual feedback per review\", \"comment\": \"We thank the reviewer for their detailed and constructive feedback. In this post, we will focus on reviewer specific feedback. We will additionally provide an overview post describing the general feedback and corresponding changes.\\n\\n- Working with an extended MDP\\n\\nYes, it is correct that the options framework in general (not just our method) extends the MDP. Our paper uses the usual semi-MDP model [3] which is common across option frameworks. Although our notation in Equation 3 remains general in that the policy at time t is dependent on the entire history, this dependence is later refined to the kind of dependence that is described in the semi-MDP framework (i.e. the dependence is mediated by the option which is described in Equation 4). We do not discuss semi-MDP framework in detail in the paper, but there are relevant discussions to be found for example in [1, 2, 3]. In addition, we have clarified this aspect in the paper.\\n\\n- Details for the switch constrained extension\\n\\nThe problem of controlling when to switch between options is a known challenge (e.g. [1]). For most domains, it is usually not known a priori what would be appropriate switching behaviour. The possibility to explicitly optimise for temporal consistency via the switch constraints, which can further improve performance, is an additional feature of HO2 (as shown in Figure 6). \\nOur presented approach aims to simplify tuning this aspect. It is easier to tune than another weighted cost term (as e.g. given in [1]) since it can be chosen independently of the reward scale and does not directly cause a potential conflict with other reward terms. In addition, setting how many times the agent should maximally switch between options along a trajectory can be more intuitive than an additional cost term, which is missing a similarly easy semantic interpretation. We will emphasise this further in the paper.\\n\\nIf the switching limit prefers to stay small, would this suggest that the best form of the algorithm is one without the options framework at all (e.g. the best version of the proposed algorithm is an unaltered actor-critic algorithm)?\\nThe algorithm often converges to a low rate of switching but never (in our experiments) degrades to the single option case (which would be equivalent to a non-hierarchical policy). This shows that even when the agent would be able to change its behaviour to represent the \\u2018unaltered actor-critic algorithm\\u2019, there are benefits in using multiple options. Further, note that MPO trains a flat single actor-critic model, and yields poorer performance. We have clarified this in the experiments section.\\n\\n- Simpler setting e.g. linear function approximation setting\\n\\nSimplifying evaluation to obtain more general insights is an important point. However, designing toy domains that still include the relevant aspects which render the real-world problems hard is challenging on its own.\\nWe investigate option learning in particular in the context of deep models as these can represent solutions to more complex tasks which share more aspects with real-world control problems. While sharing some of the challenges with deep models, the optimisation of linear models is commonly affected by different aspects of the optimization problem.\\n\\nWhile clearly multiple factors affect performance (as the reviewer suggests), we carefully ablate over these in the experiments to identify their effect. 
These experiments include:\\n1) Comparison to current on-policy option algorithms to estimate the importance of off-policy learning in HO2.\\n2) Comparison to flat and mixture policies (MPO and RHPO) with equivalent underlying policy optimization to independently evaluate the benefits of temporal and action abstraction.\\n3) Ablation over the use of switching constraints, action-conditioning, off-policyness, and robustness via trust region constraints.\\n4) Analysis and interpretation of option decompositions for simpler tasks.\\n\\n- Performance differences on the simpler benchmarks and in general\\n\\nWe have changed the paper to be more specific with respect to domains where HO2 provides the strongest performance gains. However, across all domains, the proposed method performs either better than existing methods or at least on par. \\n\\n- Variance of results and number of seeds\\n\\nWe agree that variance and significance of results in reinforcement learning represent a critical point. Commonly, authors have to trade off between increasing the accuracy of estimates and acceptable computational cost. We run 5 seeds per algorithm per task (a number common across RL papers and also used in [4] for most experiments). We see consistent results across many tasks, with stronger benefits in more complex tasks.\\n\\n\\n[Feedback continues in the next post]\"}",
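As referenced in the response above, an illustrative sketch of a forward (dynamic programming) pass over options that tracks the number of switches, so that probability mass can be restricted to paths with at most a fixed number of option changes. All names and shapes are illustrative assumptions; this is not the paper's implementation:

```python
import numpy as np

def forward_with_switch_limit(pi_H, pi_L_lik, max_switches):
    """Forward pass over options that tracks the number of option switches.

    pi_H:     (T, O, O) array, pi_H[t, o_prev, o] = high-level transition prob.
    pi_L_lik: (T, O) array, likelihood of the observed action under each option.
    Returns alpha with alpha[t, o, k] = prob. of reaching option o at time t
    having switched exactly k times (illustrative, not the paper's code).
    """
    T, O = pi_L_lik.shape
    K = max_switches + 1
    alpha = np.zeros((T, O, K))
    alpha[0, :, 0] = (1.0 / O) * pi_L_lik[0]          # uniform initial option
    for t in range(1, T):
        for o in range(O):
            # Staying with the same option keeps the switch count unchanged.
            alpha[t, o, :] += alpha[t - 1, o, :] * pi_H[t, o, o]
            # Switching in from a different option increments the count,
            # so paths that already used all their switches are dropped.
            for o_prev in range(O):
                if o_prev != o:
                    alpha[t, o, 1:] += alpha[t - 1, o_prev, :-1] * pi_H[t, o_prev, o]
            alpha[t, o, :] *= pi_L_lik[t, o]
    return alpha

T, O = 6, 3
rng = np.random.default_rng(0)
pi_H = rng.dirichlet(np.ones(O), size=(T, O))         # (T, O, O), rows sum to 1
pi_L_lik = rng.uniform(0.1, 1.0, size=(T, O))
alpha = forward_with_switch_limit(pi_H, pi_L_lik, max_switches=2)
print(alpha[-1].sum())    # mass of trajectories using at most 2 switches
```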
"{\"title\": \"Continuation\", \"comment\": \"- Robot experiments: difficulty in evaluating the performance differences and longer learning curve\\n\\nTo better understand the differences in performance, one can look at the maximum rewards in a task and the number of steps per episode. Details for these aspects are given in the appendix and we will also improve the description to provide additional intuition in the paper. \\nAll four tasks displayed in Section 4.2 use sparse binary rewards, such that the obtained reward represents the number of timesteps where the corresponding condition - such as the ball is in the cup - is fulfilled. \\nEven with the given duration, the most important points are present in Figure 5. The learning curve clearly shows the individual improvements in data-efficiency from the introduction of a mixture policy and the extension to include temporal abstraction via the option policy. Both improve performance throughout most tasks.\\n\\n- Equation 4 in detail and the reason for low-level policy term\\n\\nThis directly follows from an application of Bayes rule. The equation computes the probability of the option o_t at time t given past states and actions. We compute it by marginalizing out the previous option. Now, since o_{t-1} affects a_{t-1}, we need to take the previous action that was actually chosen into account when performing the marginalization. In other words, we treat the options as hidden variables and use states and actions as observed variables, both of which hold information about the sequence of options. We have added the above brief intuition to the paper.\\n\\nIf not addressed explicitly in the author feedback, we will address the remaining minor comments (e.g. duplication of references, additional information for the algorithm) directly in the paper. Please do not hesitate to emphasise remaining questions or concerns.\\n\\n[1] Harb, J., Bacon, P. L., Klissarov, M., & Precup, D. (2017). When waiting is not an option: Learning options with a deliberation cost. arXiv preprint arXiv:1709.04571.\"}",
"{\"title\": \"Individual feedback per review\", \"comment\": \"We thank the reviewer for their detailed and constructive feedback. In this post, we will focus on reviewer specific feedback. We will additionally provide an overview post describing the general feedback and corresponding changes.\\n\\n- Additional analysis of the behaviour of the algorithm; conditions under which it improves performance; types of emerging options\\n\\nWe agree that these aspects are important to understand. We have clarified and extended existing analysis sections. Section 4.2 focuses on differences to a simple Gaussian policy and a mixture of Gaussians policy (which can be understood as a single-step options model). In these sections, we analyse the impact of action abstraction (the ability to control by choosing from a set of individual skills/options) and temporal abstraction (the ability to model consistent behaviour enabled via the termination conditions). Furthermore, Section 4.3 includes the investigation of the impact of off-policy training, trust-region constraints and different aspects affecting the types of option decompositions. For improved understanding, we will include further details in the main paper regarding the properties of learned options based on conditions such as switch constraints and information asymmetry (information only provided as input to a part of the agent) between high- and low-level controller.\\n\\n- Clarity of writing\\n\\nThanks for pointing this out. We have gone through the paper carefully and changed any sentences that may have been unclear. In particular, we adapted the introduction to render the paper more accessible and improve overall readability. Please feel free to use the openreview diff function to trace these changes.\\n\\n- Gym experiments: On-policy and off-policy option learning; comparing mixture and option policies\\n\\nIn general, off-policy algorithms have a considerable advantage over on-policy methods as they enable multiple updates for the same data samples. This advantage grows when training with stochastic gradient descent and only small adjustments per update. With respect to our experiments, we have explicitly clarified on which domains off-policy learning (including non-hierarchical policies) outperformed existing work on on-policy option learning. Additionally, we have emphasised the equivalent details in the RHPO and HO2 comparison.\\n\\n- Gym experiments: results from DAC, OC, IOPG\\n\\nWe agree that this is an important figure and there is no perfect way of doing this comparison. We've attempted to do it in a fair manner and will clarify the motivation behind this type of comparison below.\\n\\nAs described in the caption, we use previously obtained results in the gym domains from prior work and have additionally contacted the authors to ensure that the environments and experiment setting are exactly the same. We chose to follow this direction to ensure that we take the best known results in this benchmark (a common practice in many fields e.g. computer vision, NLP and partially used in RL as well), instead of using a sub-optimal reimplementation of the algorithms which would potentially underperform existing results.\\n\\nUsing straight lines to indicate final results after the maximum training time of 2x10^6 steps instead of complete learning curves has two reasons. First, to prevent additional clutter in the graphs. Second, the learning curve comparison between on-policy and off-policy learning is only meaningful within limits. 
While we can align the number of actor steps, we cannot do so for learner steps as the ratio can be independently chosen in off-policy learning. We will further emphasise the description of the lines in the caption (which we use instead of an additional legend) to ensure the reader is aware of this setting.\\n\\n- Details for the switch constrained extension\\n\\nThe problem of controlling when to switch between options is a known challenge (e.g. [1]). For most domains, it is usually not known a priori what would be appropriate switching behaviour. The possibility to explicitly optimise for temporal consistency via the switch constraints, which can further improve performance, is an additional feature of HO2 (as shown in Figure 6). \\nOur presented approach aims to simplify tuning this property. It is easier to tune than another weighted cost term (as e.g. given in [1]) since it can be chosen independently of the reward scale and does not directly cause a potential conflict with other reward terms. In addition, setting how many times the agent should maximally switch between options along a trajectory can be more intuitive than an additional cost term, which is missing a similar semantic interpretation. Empirically, we have found performance in the learning from scratch results particularly robust with respect to different values of the constraint. We have emphasised this further in the paper.\\n\\n[Feedback continues in the next post]\"}",
"{\"title\": \"General feedback and changes\", \"comment\": \"We thank the reviewers for their detailed and constructive feedback. Below we comment on common points and highlight some changes that we have made to the manuscript in response.\\n\\nWhile most reviews appreciated the perspective, analysis and performance improvements in the submission, multiple reviewers suggested changes to improve the clarity of the manuscript and asked for additional analyses of the learned options as well as for further details regarding the constrained version of the algorithm.\\n\\nTo improve clarity we have made use of the additional ninth page. This has allowed us to include many details which had to be omitted from the submission to remain within the page limit.\\nIn particular, we have expanded the general discussion of option models and included additional details regarding prior work which will hopefully make the paper more self-contained. We have also expanded the technical description in the methods section, as well the analyses of the results.\\n\\nOne focus of our paper is to analyze the impact of different algorithmic aspects on performance and type of learned behaviour. The existing evaluations include the comparison of the impact of action abstraction (via mixture policies) and temporal abstraction (via option policies), the additional optimisation to increase temporal abstraction (via the switch constraints), the impact of trust-region constraints, off-policy learning and finally information asymmetry. To account for the additional requests in the reviews, we extended the corresponding sections and in particular added results regarding the behaviour decomposition via options in Section 4.3.\\n\\nThe possibility to explicitly optimise for temporal consistency via the switch constraints, which can further improve performance, is an additional feature of HO2 (as shown in Figure 6). Optimising temporal consistency can generally be challenging. Our presented approach aims to at least partially simplify this step. It is easier to tune than another weighted cost term (see details in individual responses) since it can be chosen independently of the reward scale and does not directly cause potential conflicts with other reward terms. \\nIn addition, setting how many times the agent should maximally switch between options along a trajectory can be more intuitive than weighting an additional cost term, which is missing a similarly easy semantic interpretation. We have emphasised this further in the paper.\\n\\nPlease find the detailed individual feedback separately posted under each review. We thank the reviewers again for their help in strengthening the paper and hope that all questions have been answered in the individual sections.\"}",
"{\"title\": \"Well-motivated paper that presents a new algorithm for hierarchical reinforcement learning using the options framework. Empirical results demonstrate gains in data efficiency in a number of problems but do not provide substantial insight into the behaviour of the algorithm.\", \"review\": \"The paper introduces a reinforcement learning algorithm with temporal abstraction using the options framework. It provides empirical results in a variety of domains, demonstrating that the algorithm can improve data efficiency.\\n\\nThe paper is well motivated. Data efficiency is an important concern in applications of reinforcement learning. The approach is sufficiently novel. The empirical results are positive, showing performance improvements in a variety of domains. Results in simulated robotic manipulation tasks are particularly positive, as measured by average return. \\n\\nThe paper can be improved by providing additional analysis to better understand the behaviour of the algorithm and the conditions under which it improves performance. For instance, it would be useful to see what type of options are being learned in the various domains and with different limits on the number of switches allowed. \\n\\nI had difficulty in evaluating the importance of the performance differences in Figure 5. In the main text of the paper, there is no information on the reward structure of the task. Additional information is present in the appendix but I could not easily locate the relevant information (if it is indeed there). For instance, it would be useful to know what the behavioural difference is between average returns of 60 and 100. \\n\\nIn figure 3, the number of switches are listed as 5. Some experimentation with various different values would be informative. In figure 5, seeing a longer training period would be informative. MPO and RHPO are still improving at the end of the learning curve. \\n\\nPlease explain Equation 4 in some detail. I could not follow. In particular, I do not follow why the $\\\\pi^{L}$ term is there.\\n\\nThe paper is not easy to read. Many sentences do not communicate the intended meaning clearly. As an example, the first paragraph of the introduction would be difficult to understand by readers who are not already familiar with hierarchical reinforcement learning, the options framework, and the papers cited. And some of the writing is not clear regardless of the background of the reader. For instance, the first paragraph ends with \\\"Overall, the interaction of algorithm and environment can become increasingly difficult, especially in an off-policy setting (Precup et al., 2006).\\\" It is not clear what is meant by a \\\"difficult\\\" interaction here.\\n\\nThe writing could be more nuanced in the discussion of results presented in Figure 3. The authors write that \\u201coff-policy learning alone [..] improves data-efficiency and can suffice to outperform on-policy option algorithms such as DAC, IOPG and Option-Critic.\\u201d This is not true in every domain. Similarly, the authors write that they \\u201cachieve improvements when training mixture policies via RHPO\\u201d. Again, this is not true in all four domains. For instance, in Hopper-v2, RHPO lags behind MPO. \\n\\nIn the pseudo code for Algorithm 1 on page 5, please specify inputs to the algorithm. And please do not use any undefined symbols (e.g., $\\\\pi\\u2019$, $Q\\u2019$). \\n\\nThere is no reference to Figure 6 in the text. \\n\\nIn Figure 3, please include DAC, OC, and IOPG in the figure legend. 
\\n\\nIn Figure 3, the way the performance of DAC, OC, and IOPG are presented on the plots is misleading. For each algorithm, a constant value is shown from step 0 until step $2 \\\\times 10^6$ although these constants correspond only to average return obtained after $2 \\\\times 10^6$ steps. Furthermore, my understanding is that these numbers have been taken from Zhang & Whiteson (2019). For best experimental practice, these algorithms should be tested by the authors themselves along with the other algorithms shown in the plot (e.g., HO2). This would ensure that the performances are truly comparable and that there are differences in relevant experimental settings. In addition, it would allow the reader to compare the algorithms along the entirety of their learning curves.\\n\\nDoina Precup's dissertation is listed twice in the references, with different publications years. \\n\\nMisuse of the comma is prevalent throughout the paper. \\n\\nThe author response answered some of my questions. But I cannot say that I now better understand the behaviour of the algorithm and the conditions under which it improves performance. I agree with reviewer 2 that analysing behaviour and performance in a simpler setting would be informative. While the writing has improved, it stills lacks the clarity and nuance one would wish to see in a paper at this conference.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"## Summary\\nThis paper introduces a novel option-learning policy gradient method, HO2. The method learns a parameterized joint distribution over options and actions and uses a soft-continuation based approach to interrupt or \\\"switch\\\" between options before option termination. The method introduces a new meta-parameter which enforces a hard limit on the number of \\\"switches\\\" that can occur, significantly reducing the variance of the option-learning method and replacing softer loss penalization based approaches. The paper demonstrates the performance of the proposed algorithm on a handful of 3D virtualized environments as well as on robotic simulation tasks.\\n\\n## Review\\n### Summary\\nI am currently leaning towards recommending reject for this paper. While the approach and algorithm are novel, they also appear to be highly complex, don't provide a noticeable or consistent improvement over much simpler benchmarks, and I fear that the improvements that _are_ seen are likely due to variance in the results or are hidden behind the additional machinery in the algorithm. I remain slightly skeptical of the utility of the proposed meta-parameter $n$ for setting a hard limit on the number of \\\"switches\\\" between active options, with my skepticism primarily due to concerns on the difficulty of tuning this parameter and the domain-specificity of the parameter.\\n\\n### Details\\nI'm curious about some of the hidden complexities in the algorithm. In the problem setup, a policy is defined on an MDP as possibly being $\\\\pi(a | h)$ where $h=\\\\{s_t, a_{t-1}, s_{t-1}, a_{t-2}, \\\\ldots, s_0\\\\}$, that is a full history of interactions. First, I suppose this means we are no longer dealing with the original MDP and are working in a modified MDP where the \\\"Markov\\\" state is a full history; already leading towards an exponential growth of the state-space. The algorithm itself depends on a recursive product of distributions for the entire length of a sampled trajectory as a result of this adapted Markov state. I'm initially worried at how difficult it is to keep this probability from decaying towards 0 rapidly. There appear to be multiple partial marginalizations (e.g. $\\\\sum_{i = 0}^M p(X | Y = y_i) p(Y=y_i)$) which require rescaling the final product to stay within the standard simplex, perhaps this is used to prevent the product of distributions from decaying towards 0?\\n\\nOne of the primary motivations of the hard switching limit is that an auxiliary penalization on the objective is hard to tune. However, it isn't clear to me that the hard limit parameter $n$ would be any easier to tune. In fact, because the inclusion of $n$ seems to require more partial marginalizations, it almost seems as if this would cause additional complexity in the optimization problem. Did you find this meta-parameter easy to tune? What are the effects of choosing it to be $n=5$ for the experiments instead of (say) 10? It appears to play a very mild variance reduction role in the results (though with only 5 seeds, we _really_ can't say much about variance since this is severely underestimating variance). If the switching limit prefers to stay small, would this suggest that the best form of the algorithm is one without the options framework at all (e.g. 
the best version of the proposed algorithm is an unaltered actor-critic algorithm)?\\n\\nThe experiments in the paper start with deep neural networks with all of the necessary machinery to make Deep RL run at the moment, including experience replay, target networks, ADAM optimizer, layer norms, mini-batches, various types of activations on each layer, neural networks of different architectures for each of the three sets of weights, different stepsizes for each of the networks, etc. I fail to see why this approach couldn't have been studied in a much simpler linear function approximation setting where statistically significant results with fewer confounding variables could have been achieved. As it stands, it is entirely unclear to me if the proposed algorithm actually provides any benefit when, in the midst of all of the machinery, the modifications above the benchmark algorithms are modest. Given this, there certainly is something to be said for a novel algorithm that does perform favorably when included in the machinery of a Deep RL feat of engineering.\\n\\nI , however, am unsure if the proposed algorithm does perform favorably. From the results, it appears that in most cases RHPO performs equivalently to the proposed; certainly not statistically significantly different. With only 5 random seeds, it would be very hard to make sound claims; especially considering the known variance issues in Deep RL (take Henderson et al. 2018 for a deeper discussion). The one place where the proposed algorithm _does_ outperform RHPO is in the robot simulator (though again with only 5 seeds, heavy skepticism is called for). I found this fascinating and am curious if there is some structure that the proposed algorithm is able to take advantage of in this domain that RHPO is unable to replicate.\\n\\n# After discussion period:\\nI have read all other reviews, resulting conversation, and have read the edits to the paper. After extended conversation with the authors and a deeper investigation into the empirical components of the paper, I find I have further concerns than originally realized in my original review and that many of my original concerns remain. I am lowering my recommendation from a 5 -> 3 to reflect the new concerns; namely the validity of the ablation study as detailed in-depth below.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
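Two of the concerns in the review above — the apparent dependence on the full history $h$ and the recursive product of distributions decaying towards 0 — are commonly addressed by carrying the option posterior forward as a compact belief vector and renormalising (or working in log space) at each step. A minimal, hypothetical sketch of such a belief update, not taken from the paper under review:

```python
import numpy as np

def update_option_belief(belief, trans, action_lik):
    """belief: (K,) posterior p(o_{t-1} | h_{t-1});
    trans: (K, K) high-level transition p(o_t | o_{t-1}, s_t);
    action_lik: (K,) likelihood pi(a_t | s_t, o_t) of the taken action."""
    predicted = belief @ trans      # marginalise the previous option
    joint = predicted * action_lik  # weight by how well each option explains a_t
    return joint / joint.sum()      # renormalise: the belief stays in the simplex
```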
"{\"title\": \"This paper proposes an efficient option learning method based on TD(0) type objective. The overall objective relies on both action abstraction and temporal abstraction, an interesting ablation study is given to understand more the effect of each individual component.\", \"review\": \"This paper studies an important area in RL, hierarchical RL, which improves data efficiency by incorporating abstractions. In this paper, the authors proposes an efficient option learning algorithm, which utilizes a TD(0) type objective and constrains the learned policy being not too far away from the past policy. In terms of different abstractions, the paper studies action abstraction through a mixture policy, and temporal abstraction through explicitly limiting the maximum number of switches between options.\\n\\nThe method is well-motived in general, however, I feel the notation in Alg 1 is a little bit unclear, what is \\\\pi' and Q'? The ablation study is well-done, to separate the effects of different types, and gives practitioners some useful guidelines. There is one thing I am curious about, do you try the methods using some online data? Since the paper argues the improvement compared with online option learning, it would be great to also have some experiments using online data for a fair comparison. \\n\\nI am not that familiar with hierarchical RL, so I could not give a fair judgement of the novelty compared with previous option learning literature. In terms of quality, it is a well-motivated work, clearly written in most part of the paper and gives a method with reasonably good empirical performance. I feel the off-policy argument in this paper is less clear, is it just achieved by using a Q-learning based method? This can also be used online, and how is the online-version compared with the actor-critic option learning method?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Very interesting paper but difficult to follow\", \"review\": [\"The paper considers the Hierarchical Reinforcement Learning setting, Options in particular, and proposes an algorithm that allows to learn both the high-level and low-level (option) policies at once, from off-policy samples. An original aspect of the algorithm is that it is easy to constrain the learned policies on how often they terminate an option and start a new one. This prevents the agent from learning tiny options that immediately terminate. It is unclear whether it can also be used to prevent the agent from learning a single big option that does everything.\", \"The paper is very interesting and has a high educational value. It combines many different approaches and is mathematically sound. The empirical evaluation shows encouraging results. However, the significance of the empirical results is difficult to measure, due to a few cons of this paper:\", \"The paper is quite difficult to follow, and requires several attentive reads to be understood (by someone having a deep knowledge about options, intra-option learning and the Option-Critic architecture). I believe that the lack of clarity comes from the brevity of the paper, that has to fit in the page limit. I would suggest the authors to remove Figures 1 and 2, that take place while still being very difficult to understand. The text helps understand the figures, the figures do not help understand the text, so I would remove the figures. \\\"Problem setup\\\" in the preliminaries could also be removed, and replaced with an introductory paragraph in \\\"Method\\\", that clearly states what the contribution will be, and what are its main components/properties.\", \"\\\"multi-task learning\\\", just before \\\"Experiments\\\", is then used in the experiments to increase the sample-efficiency of all the algorithms. Because it is used for all the algorithms and not ablated, I don't see how it contributes to the paper. I think that the paper already proposes many ideas, and that multi-task learning could be omitted and left for a future paper.\", \"In the experiments, comparing MPO with HO2 allows to see a benefit from the use of options, with the proposed learning algorithm. This is a good point. However, MPO is not a well-known baseline, and a quick glance at the plots does not allow the reader to see that MPO does not use options. I would suggest either to add a little \\\"no options\\\" next to MPO in the figures, or to replace it (or add to it) PPO, ACKTR, ACER or the Soft Actor-Critic (SAC). These algorithms are well-known baselines. Not all of them are off-policy, but I believe that comparing HO2 to state-of-the-art algorithms, without restriction to off-policy ones, would better allow to illustrate all the gains to be obtained by HO2.\", \"A good argument for the use of options is to use them for explainability: telling an observer what is the current option, and what it tries to achieve. Is it possible to have a small discussion of what do the options learned by HO2 do? Do they aim at goals, or do they perform small bits of trajectories?\", \"In summary, I like the proposed algorithm, the core contribution of the paper, the contents of Section 3. However, the clarity of this paper is to me too low, and prevents fully grasping the impact of the proposed research. I therefore recommend against acceptance, and invite the authors to remove parts on multi-task, figures that do not help, and use well-known baselines in their evaluation. 
With the space gained with these changes, more text can be spent summarizing what the contribution will be, and motivating the use of options.\"], \"author_response\": \"the authors answered my question about the absence of learning curves, and provided extra details. However, I still think that the paper could be clearer and more focused, a sentiment that I think I share with the other reviewers. Given my hesitation, I would therefore not vote for accepting this paper, but I acknowledge that the proposed method is original and interesting, so I would not mind if this paper were to be accepted.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
v-9E8egy_i | Gated Relational Graph Attention Networks | [
"Denis Lukovnikov",
"Asja Fischer"
] | Relational Graph Neural Networks (GNN) are a class of GNN that are capable of handling multi-relational graphs. Like all GNNs, they suffer from a drop in performance when training deeper networks, which may be caused by vanishing gradients, over-parameterization, and oversmoothing. Previous works have investigated methods that improve the training of deeper GNNs, which include normalization techniques and various types of skip connection within a node. However, learning long-range patterns in multi-relational graphs using GNNs remains an under-explored topic. In this work, we propose a novel GNN architecture based on the Graph Attention Network (GAT) that uses gated skip connections to improve long-range modeling between nodes and uses a more scalable vector-based approach for parameterizing relations. We perform an extensive experimental analysis on synthetic and real data, focusing explicitly on learning long-range patterns. The results indicate that the proposed method significantly outperforms several commonly used relational GNN variants when used in deeper configurations and stays competitive to existing architectures in a shallow setup. | [
"graph neural networks",
"GNN",
"long-range dependencies",
"deep GNN",
"relational GNN"
] | Reject | https://openreview.net/pdf?id=v-9E8egy_i | https://openreview.net/forum?id=v-9E8egy_i | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"M5wvDoC7S1T",
"SHkdF09iHH5",
"ttRbDwJH5X",
"v7T89pn2GdK",
"7R-1Pl01niw",
"y4nQV0crk8t",
"18e-vYxXJBh",
"kMfy43gu1w",
"sk-2xxBdxDM",
"lkLR6SBHf_B",
"QVQqjQzxphp",
"JEBS70p279y",
"MQC1SF8RmJp",
"El5akkWU2Ma",
"RTLVgcUQL3i"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040428990,
1606162013237,
1605721949998,
1605718158801,
1605716571142,
1605715712691,
1605715187868,
1605690237964,
1605690210801,
1605634271704,
1605633936968,
1604028252844,
1603850843312,
1603828986106,
1602826582516
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3645/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3645/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3645/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3645/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3645/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3645/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3645/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3645/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a GNN architecture for multi-relational data to better address long-range dependencies in graphs. The proposed GR-GAT model is a variant of graph attention networks (GAT) with, among other modifications, vector-based edge type embeddings and GRU-type updates. Results are presented on AIFB, AM, and on synthetic benchmarks.\\n\\nThe reviewers agreed that this is an interesting contribution and that the results on the chosen synthetic benchmarks are insightful, but that experimental evaluation on real data and overall motivation of the architecture is lacking. In the rebuttal period, the authors have improved the writing and strengthened the motivation of the paper. However, given the limited amount of time, the authors were not able to sufficiently address the lack of experimental validation on real data (beyond AIFB & AM). I am inclined to agree with the reviewers that this paper needs significantly more work on the experimental evaluation, the overall presentation needs to be refined and it needs to more carefully analyse the effect of each individual architectural modification to meet the bar for acceptance.\"}",
"{\"title\": \"Rebuttal revision\", \"comment\": \"Dear readers,\\n\\nWe just updated our manuscript using your feedback. We are looking forward to receive your feedback on this updated version.\", \"the_following_changes_have_been_done\": \"Section 3 (motivation) is added to better motivate the particular problem we are tackling.\\n\\nSection 4 is reordered, shortened, and updated to better motivate the different design choices. In particular, the SGRU has been moved to the beginning (4.1), the description of the attention mechanism (4.2) is shortened (omitted text is in Appendix) and motivated better (first paragraph), and additional motivation is provided for the message function (4.3).\\n\\nThere are some significant changes and additions in experimental section (Section 5) as well: \\n(i) For the first task (5.1), we verify that the poor performance of GGNN is not due to a deeper network. \\n(ii) Here, we also augmented the ablation study to include a direct comparison to a model consisting only of a SGRU, compared to a GRU.\\n(iii) We change the second task/synthetic dataset (5.2) to a more challenging, semi-supervised version and ran an additional setting that illustrates the importance of the SGRU as opposed to the GRU.\\n\\nThank you for reading!\"}",
"{\"title\": \"Follow-up\", \"comment\": [\"Thank you for your response.\", \"I hope the problem and the proposed solution will be well motivated and explained well in the updated version.\", \"As you also agree that the GraphLSTMs can similarly better model horizontal dependencies, it would be useful to also include it in the experimental analysis.\", \"I strongly recommend to include a backpropagation based analysis for dense+residual connections and GraphLSTMs, similar to GGNNs. This will strengthen the motivation of the paper significantly.\", \"Only two real-world datasets are reported and that too on AIFB the proposed model is compared against only two baselines. At the minimum, I would like to see results for WGCN which was compared on the other dataset.\", \"I partially agree with your motivation to not evaluate the graph classification task but not on the link prediction task. Albeit, it is not a deal-breaker for me provided that the node classification experiments are strong.\"]}",
"{\"title\": \"Response to authors\", \"comment\": \"Thank you for your response.\\n\\nHope the future version of this paper could be improved, especially in the aspect of motivation and method description.\"}",
"{\"title\": \"Follow-up on initial response\", \"comment\": \"Thank you for your response.\\n\\nFor now, I will await the updated motivation and explanation for individual model components, and results on OGB datasets. \\nIn my opinion, having such results (as well as making sure that max-aggregator architecture doesn't already solve the max-tree task), is critical before the paper is (close to) being ready.\\n\\nLess importantly, I would like to highlight that my GaAN suggestion didn't mean to say that your entire work is not novel.\", \"to_clarify\": \"I think at least the SGRU component is novel. The GaAN reference is just given as an indication that the idea of gating an attention mechanism, in itself, is not novel.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for taking the time to review our work and providing helpful comments.\\nWe are updating our paper using your comments and provide it to you as soon as possible.\", \"see_below_for_detailed_responses_to_the_concerns_you_raised\": \"(1) We agree that the changes we propose build on well-known principles in neural network design. However, the concrete method we propose is novel. Especially, to the best of our knowledge, it is the first that focuses on the problems associated with breadth-wise backpropagation. We are not aware of any existing architecture within the message-passing framework that would not have these problems. \\nIt seems like we did not do a good job of differentiating our design decisions from other works and motivating our design decisions but we will improve our explanations using the additional space in the elaborated version.\\n\\n(1a) Thank you for the paper, we will add it to our references. Based on your comments, we realized that the proposed message function has not been motivated properly. But we would like to highlight that our main contribution there is not vector-based parameterization, which has been used in several previous works, as we mention in the paper, but the improved backpropagation compared to the previously proposed models. However, with the extra space, we will more clearly explain the motivation.\\n\\n(1b) The changes are indeed minor. However, removing the value transformation seems to be essential. Without it, the entire method appears to significantly underperform.\\n\\n(1c) We will run additional experiments for the first task to study the SGRU in isolation. We already ran some additional ablation comparisons on the second task where the SGRU provides improvement when coupled with \\\\mu_MM. We will add these additional results to the updated version of our paper.\\n\\n(2) Finding a good dataset with long-range dependencies that has been used by previous work has been a major issue for this work and we thank you for suggesting the molecular datasets from OGB. We will be working on these experiments. We would also appreciate more dataset suggestions.\\nOne problem in general with graph classification tasks though, could be that the final graph pooling layer is already enough to capture distant interactions and thus, the changes we propose would not lead to improvement. This is also why we focused on node classification tasks. What do you think about this argument? Would you suggest any node classification datasets that could be useful here?\\n \\n(3) We will provide details about the datasets in text.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thank you for taking the time to review our work and providing helpful comments.\\nWe are updating our paper using your comments and hope to provide it to you as soon as possible.\", \"see_below_for_detailed_responses_to_the_concerns_you_raised\": \"(1) We agree that the changes we propose build on well-known principles in neural network design. However, the concrete method we propose is novel. Especially, to the best of our knowledge, it is the first that focuses on the problems associated with breadth-wise backpropagation. We are not aware of any existing architecture within the message-passing framework that would not have these problems. \\nIt seems like we did not do a good job of differentiating our design decisions from other works and motivating our design decisions but we will improve our explanations using the additional space in the elaborated version.\\nPlease note that the proposed SGRU is different from the GRU typically used in GGNNs and actually aims to solve the shortcomings of GGNN\\u2019s GRU.\\nAlso, please note that in addition to vector-based parameterization, we use a message function that improves backpropagation compared to, for example, CompGCN. This property is more interesting to us than the fact that the relations are parameterized using vectors. We will improve the explanation in the paper to better convey this.\\n\\n(2) The explanation and motivation should indeed be improved, as other reviewers also pointed out, and will be a major focus point for the next version of the paper. Specifically, we will more clearly explain the problem we are addressing (improving breadth-wise backpropagation), reorganize the approach section and motivate the decisions more extensively. Currently, the motivation for the breadth-wise backpropagation is extensively discussed only for the GGNN, in appendix.\\n\\n(3) We agree that we need more experiments on real data with hopefully more conclusive results. We will try to run experiments on some molecular datasets from OGB. We would also appreciate any suggestions for additional synthetic and/or real tasks.\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear AnonReviewer1,\\n\\nWe are now entering the second discussion stage. Could you please check whether the authors have addressed your concerns and questions and potentially ask any further clarification questions?\\n\\nThank you,\\nYour Area Chair\"}",
"{\"title\": \"Author response\", \"comment\": \"Dear AnonReviewer4,\\n\\nWe are now entering the second discussion stage. Could you please check whether the authors have addressed your concerns and questions and potentially ask any further clarification questions?\\n\\nThank you,\\nYour Area Chair\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thank you for taking the time to review our work and providing helpful comments.\\nWe are updating our paper using your comments and hope to provide it to you as soon as possible.\", \"see_below_for_detailed_responses_to_the_concerns_you_raised\": \"--> \\u201cMy main concern is that the authors do not properly motivate most of their design choices, or properly ground them\\u2026\\u201d and \\u201cThe best-motivated addition---the SGRU update---is in my opinion a pretty neat idea and should be more highlighted...\\u201d\\nWe will more clearly justify the issue with backpropagating horizontally that is present in the most popular message passing networks and present the SGRU more clearly as a solution for that problem. We pushed the discussion for GGNN to appendix but we have more space now.\\n\\nThe SGRU was the first change from GGNNs we investigated but found that it alone was insufficient to solve the toy tasks. The other additions were also motivated by improving the backpropagation to neighbour states, which we will motivate better. We will reorganize the explanation of the approach and use the additional space to extend the motivation and discussions.\\nWe will also run experiments for the first task to test the SGRU and GRU in isolation. In addition, we will try to run additional ablations for the tree max task to see whether the SGRU improves performance compared to other update options. Do you have any other tasks in mind?\\nWe would also like to remark that GaAN implements a predictor of head importance, and thus enables the model to learn to ignore certain heads after aggregation. Compared to this, the (S)GRU node update function enables element-wise control (rather than head-wise). Also, note that GaAN doesn\\u2019t address the problem of learning long-range patterns during graph encoding.\\n\\n--> \\u201cWhile the GR-GAT model achieves some strong outcomes on synthetic benchmarks, the strength of these results is, in my opinion, insufficient to carry the weight of the paper\\u2026\\u201d\\n\\nWe agree that more experiments on real data will strengthen the paper. To this end, we will experiment with molecular datasets from OGB.\\nRegarding the tree max task, in the latest version (which we will provide as soon as possible), we use a more challenging semi-supervised version of the task (which is now supervised at every node).\\n\\nThank you for the paper suggestion! \\n\\nWe can try to run some max-aggregated baselines and/or ablations on tree max to verify your concerns. Which settings do you think would be most interesting?\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"EDIT: formatting\\n\\nThank you for taking the time to review our work and providing helpful comments.\\nWe are updating our paper based on your comments and hope to provide it to you as soon as possible.\", \"see_below_detailed_responses_to_the_concerns_you_raised\": \"(i) We will improve the explanation of the problem and improve the introductory and motivating parts of the paper to more clearly focus and define the issue with backpropagating \\u201chorizontally\\u201d in GNNs.\\n\\n(ii) Perhaps the name \\u201csymmetrically gated\\u201d is not the best here. We use it to refer to the fact that both inputs (which in the case of a GNN corresponds to the aggregated information from the neighbours) and states are treated similarly, and both backpropagation to the neighbourhood aggregation vector as well as previous node states benefits from the same long-range modeling advantages of normal GRUs. With the extra space now, we will extend the discussion in 3.3.1, so that it better explains the contribution and better follows on the improved motivating parts (see also the previous point).\\n\\n(iii) The proposed method is also motivated by the properties of TreeLSTMs and GraphLSTMs, because in these models the information is propagated horizontally better than in the standard GNNs.\\nRegarding Graph LSTMs in general, we found several variants with small differences in the literature. For example, Peng et al. 2017 uses neighbour-specific forget gates. Compared to that work, we use an attention mechanism instead of forget gates, which have to learn independently. We will also discuss this in related work.\\n\\n(iv). That\\u2019s a good suggestion. We will try a residual version of the gated functions in the future.\\n\\n(v). In table 2, we have some ablations of the components separately, including using \\\\mu_MM instead of the proposed message function. We currently provide GR-GAT(Ident) - value transformation and GR-GAT(SGRU). We can include GR-GAT(SGRU) - value transformation. But because of the many small design choices, it is difficult to provide an extensive ablation due to many possible combinations. That\\u2019s why we focused on the presented ones, which we deemed most important.\\n\\n(vi). We will try to run experiments on the molecular datasets from OGB.\\n\\n(vii). We ran such an analysis and found that there is no clear picture where more distant nodes get much worse performance than closer nodes. However, perhaps this is not very surprising given that all nodes are equally far from each other in terms of update distance, as we shortly illustrated in the discussion for the first experiment.\\nGR-GAT(GRU) vs GR-GAT(Ident): The Ident variant does not have an update function and relies solely on the attention mechanism to build representations. The GRU performs worse because of the \\u201chorizontal\\u201d backpropagation problem we outlined earlier and elaborated on in more detail in Appendix (the GGNN discussion).\\nSGRU and Gate are not the same. The gate variant reuses the same architecture as in 3.1. We will drop the gate variant from the discussion.\\n\\n(viii) WGCN on AIFB: We could not find WGCN results for AIFB in the literature.\\n\\n(ix) We focused primarily on node classification experiments because (1) it seems like an easier training setup than link prediction and (2) graph classification relies on a final pooling layer that builds a single representation of the graph. 
To elaborate on the last point, we think that in graph classification, many patterns spanning a large distance can be captured by the final pooler so if you just take care of the depth-wise skip connection, you could get similar performance as a model that also backpropagates better \\u201chorizontally\\u201d. In node classification, however, every node has a potentially different representation. So a node classification task seems like a more challenging setup.\\nThat being said, we will try to include additional experiments on the molecular datasets from OGB, which include graph classification tasks.\"}",
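To make the ablated components in this thread concrete, below is a deliberately simplified PyTorch sketch of the kind of layer being discussed: a vector-parameterised, gated relational message, attention keys that include the relation embedding, and a gated node update. It is not the authors' GR-GAT — all module names are hypothetical, and a standard GRUCell stands in where the paper uses its SGRU.

```python
import torch
import torch.nn as nn

class RelationalAttnLayer(nn.Module):
    def __init__(self, dim, num_rels):
        super().__init__()
        self.rel_emb = nn.Embedding(num_rels, dim)   # one vector per relation
        self.key = nn.Linear(2 * dim, dim)
        self.query = nn.Linear(dim, dim)
        self.update = nn.GRUCell(dim, dim)           # swap for SGRU / identity

    def forward(self, h, edges, rel_ids):
        src, dst = edges                              # each of shape (E,)
        r = self.rel_emb(rel_ids)                     # (E, dim)
        msg = h[src] * torch.sigmoid(r)               # gated relational message
        k = self.key(torch.cat([h[src], r], dim=-1))  # relation-aware keys
        score = (self.query(h)[dst] * k).sum(-1) / k.shape[-1] ** 0.5
        w = torch.exp(score - score.max())            # stabilised exponentials
        # softmax over incoming edges, normalised per destination node
        denom = torch.zeros(h.shape[0]).index_add_(0, dst, w) + 1e-9
        agg = torch.zeros_like(h).index_add_(
            0, dst, (w / denom[dst])[:, None] * msg)
        return self.update(agg, h)                    # gated node update
```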
"{\"title\": \"Interesting analysis but writing needs improvement + Need more real world datasets\", \"review\": \"\", \"summary\": \"The authors propose a new gating based recurrent graph attention networks for multi-relational graphs to capture long-range neighbor dependencies. The authors provide an interesting analysis of current gated GNN models (in the appendix + Figure 3) in light of their ability to capture long-range dependencies in graphs. Experimental results are reported for node classification with two synthetic datasets and two real-world datasets. \\n\\n\\u2014\\u2014\", \"pros\": \"(i) The work addresses an important problem of long-range dependencies with conventional graph neural networks. The work has an interesting backpropagation based explanation of the issue in the Appendix and Figure 3 provides a good illustration of the same. \\n\\t(ii) The synthetic experiments are interesting and helpful to evaluate models for long-range dependencies \\n\\t(iii) Impressive results on synthetic experiments \\n\\t\\n\\u2014\\u2014\", \"major_concerns\": \"(i) Though the problem of interest is explained well in the appendix from the view of Gated GNNs. The paper's main section lacks a clear explanation of the problem; especially, there is no explanation of what it means by the horizontal vanishing gradient problem. \\n\\n\\t(ii) The model motivation is not clearly written in the main paper \\u2014 why model both the inputs as states of a gated recurrent network. Eqn: 10 is not straightforwardly clear why the redundant combination of information is helpful. Overall, the primary contribution discussed in section 3.3.1 needs to be expanded and explained in contrast to GRU update to capture long-distant neighbor information. A similar backpropagation analysis for the proposed model will help us understand the power of the proposed model. Also, why is the model called 'symmetrically' GRU? \\n\\n\\t(iii) The r_x gate in Eqn: 9 is similar to the forget gates in LSTMs. How does it compare with the GraphLSTM updates? \\n\\n\\t(iv) Residual and Dense connections are not discussed and experimented. Like in JK-Nets, dense connections can be added to the each of the GCNs pertaining to different relations or can be added for the combined layer output from all relations. The baselines with highway connections need to be evaluated. \\n\\n\\t(v) There are too many components or design choices proposed/made, but there are no ablation studies on all the components. (a) Gated relational message, (b)Redundant usage of relational information for attention keys, (c) Concat-Ensemble of value vectors (d) symmetric GRU. Even in the current set of variations, would like to see, GR-GAT(SGRU) - value transformation, GR-GAT(SGRU) \\n\\n\\n\\t(vi) Results are reported only for two real-world datasets. MUTAG and BGS can be added. No real-world datasets or tasks with potential long-range dependencies are experimented. Ex: Molecular graphs (MUTAG, ZINC, etc. ) and Protein graphs.\\n\\n\\t(vii) Tree Max:\\n\\t\\t- A height wise results+analysis of node-level task would be interesting.\\n\\t\\tIt is especially hard to understand why the GR-GAT(Ident)- value transform performs poorly. On the same note, how does GR-GAT(SGRU) - value transformation perform?\\n\\t\\t- Why does GR-GAT(GRU) perform way poorer than GR-GAT(Ident) ? \\n\\t\\t- The text about model variations mentions GATE and SGRU to be the same but in Table: 2, there is both GR-GAT (SGRU) and GR-GAT(Gate). 
\\n\\t\\n\\t(viii) WGCN results on AIFB ?\\n\\n\\t(ix) Only node classification results reported, in which case the scope of the model and results studied should be explicitly mentioned to be restricted to node classification if that is the intent. If that is not the case, additional link prediction results like RGCN or other tasks should be reported. \\n\\n\\t\\tAlso, Additional results on single-relational homogeneous graphs can help disentangle the effect of the proposed relational module from the main contribution, the gating mechanism.\", \"overall_recommendation\": \"The paper has interesting content, but the paper is not well organized and motivated well. There is sufficient merit if the analysis could be reformulated and generalized for all message-passing models \\u2014 the horizontal long-term dependency. The backpropagation based vanishing gradient issue discussed is limited to Gated GNNs alone. It is essential to discuss residual and dense connections too. On the experimental front, adding more real-world datasets would strengthen the paper. \\n\\u2014\\u2014\\nPost Rebuttal \\nIncreased the score from 6 to 7. \\nI would have strongly recommended the paper if it had more real-world datasets.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Promising long-range tweaks to the relational GAT, but lacks clear motivation\", \"review\": \"The authors propose Gated Relational Graph Attention Nets (GR-GAT), a set of modifications to the GAT architecture in order to make them stronger under long-range relational reasoning, evaluating on various hand-crafted benchmarks as well as a real-world dataset previously explored in the area.\", \"the_gr_gat_tackles_all_aspects_of_the_graph_neural_network_pipeline\": \"- Message function that takes into account the edge type embedding, and features explicit gating over the sender node's past features, and usage of the CELU activation;\\n- Multi-head attentional aggregator which splits the value vectors into chunks, rather than replicating them;\\n- Symmetric approach to the GRU update rule, which factors both inputs into the gating.\\n\\nThe three approaches are potentially meaningful for ameliorating various issues with long-range reasoning, such as overfitting or vanishing gradients, and I think the methodology from this paper could be useful to GNN practitioners. However, the paper's current presentation and motivation does not feel suitable for a venue like ICLR, in my opinion.\\n\\nMy main concern is that the authors do not properly motivate most of their design choices, or properly ground them in a particular issue with (R)GATs or GGNNs. Many choices (to name a few: the gating message, the CELU activation, or the splitting value vector) are name-dropped in the paper, without properly explaining their significance in the architecture. The ablation studies and experimental discussions are also lackluster in this sense, in my opinion: the authors' discussion doesn't go much further than \\\"the results demonstrate method X outperforms method Y\\\", not managing to provide deeper insight into any of the design choices.\\n\\nThe best-motivated addition---the SGRU update---is in my opinion a pretty neat idea and should be more highlighted and motivated in the paper, perhaps with experiments specially designed to show its benefits. Gated attention as well as split messages have already been featured or attempted (in some form) by prior work: see, for example, the GaAN model from Zhang et al. (UAI 2018), hence the novelty of such proposals, in isolation, is limited.\\n\\nWhile the GR-GAT model achieves some strong outcomes on synthetic benchmarks, the strength of these results is, in my opinion, insufficient to carry the weight of the paper, especially for a venue like ICLR. Especially considering that these graphs are designed with rather simple edge types (such as edge direction), the value of these results when transferred to real-world heterogeneous graphs is unclear. Further, the max-tree benchmark, where most of the interesting ablations are shown, is a bit concerning: it appears as if it could be quite easy to solve if using max-aggregation rather than attention (see Richter and Wattenhofer's \\\"Normalized Attention Without Probability Cage\\\" for some motivation on this), and maybe in this sense would not require any of the \\\"heavy artillery\\\" proposed here. 
\\n\\nAs mentioned above, it would be interesting to create more targeted and diverse synthetic benchmarks, perhaps to specifically battle-test the SGRU component.\\n\\n========= Post-rebuttal update:\\nI thank the authors for carefully addressing my comments, as well as other reviewers'.\\n\\nUltimately, this is a nice paper with a novel recurrent component, and I can see how it could perform well in practice.\\nHowever, the lack of stronger real-world experimentation (on datasets such as OGB) unfortunately renders the contribution insufficient -- the synthetic benchmarks being insufficient on their own to pull the weight of the paper.\\n\\nI retain my score, but encourage the authors to carefully revise and resubmit for the next venue should the paper be rejected.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Recommendation to reject\", \"review\": \"#####Summary#####\\n\\nThis paper proposes a new GNN model (GR-GAT) for multi-relational graphs. The proposed method has better ability of capturing the long-range information. Essentially, the proposed GR-GAT is modified from GAT so that it can apply to the multi-relational graphs. Since the modifications are common and frequently used techniques, the novelty of this work is not enough. Also, why these modifications can help to capture long-range information is not well explained in this paper. Overall, this work is ok but not good enough for ICLR.\\n\\n#####Pros#####\\n\\n(1) The experiments on synthetic are well designed and can show the power of the proposed GR-GAT. \\n\\n(2) This paper studies a meaningful and challenging problem; that is capturing long-range information using GNNs.\\n\\n#####Cons#####\\n\\n(1) The novelty of the proposed method is not enough. Based on the description in Section 3, the proposed model is basically under the message passing framework (Eq (1)) and the concrete implementations of the three functions included in the message passing framework is not novel. For example, parameterizing message function using vector, aggregating neighborhood using attention, and updating using GRU are all popular used techniques in GNNs. Overall, this method is likely to be a combination of GAT and some basic techniques for multi-relational graphs.\\n\\n(2) More importantly, the motivation is not explained clearly. The current version did not well explain why the proposed model can help to model long-range dependencies in Section 3. \\n\\n(3) To show the ability of capturing long-range dependencies, the experiments are only conducted on small synthetic datasets and specific defined tasks. It is not enough to show the ability of capturing long-range information. More experiments and comparisons on large real-world datasets should be considered.\\n\\n#####Suggestions for improvement#####\\n\\n(1)\\tI think the main issue of the current version is that our readers cannot tell the contributions of this paper from Section 3. We cannot find what is the key proposal of this paper and how this proposal can help to capture long-range information. I think if this point can be improved, the novelty and motivation of this paper can be clearer.\\n\\n\\n######\", \"update\": \"After looking at the revised version, I would like to raise my score to 5 since the motivation is clearer.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Limited architecture novelty, no convincing performance on real tasks.\", \"review\": \"## Summary\\nThis paper presents a graph attention architecture that captures long-range interactions. The novelties in the architectures are (1) vector-based parameterization of edge type in modeling message, (2) slight modification of graph attention (Section 3.2), and (3) GRU-based node update function. The experiments are primarily on synthetic tasks. However, it is unclear if modeling such long-range interaction is useful in real tasks. The paper fails to demonstrate convincing results on the real tasks of entity classification in knowledge graphs.\\n\\n## Pros\\n1. Detailed architecture explanation.\\n2. Good performance on synthetic tasks.\\n3. Careful design of synthetic tasks.\\n\\n## Cons:\\n1. The novelty of architecture is limited as detailed below.\\n- Vector-based parameterization of edge type in the relational graph has been commonly adopted in GNNs for molecular graphs (e.g., Eq (1) in https://arxiv.org/pdf/1709.04555.pdf), and not novel. \\n- The modifications of graph attention architecture are rather minor (removing a single linear transformation, adding edge type embedding in the key of the attention mechanism). \\n- GRU-based node update function is not empirically shown to be beneficial although being highly complicated.\\n2. Experiments are largely synthetic, and no convincing results are provided for the real datasets on entity classification in knowledge graphs. It is unclear if modeling long-range interaction is useful in practice. One domain long-range interaction could be useful is molecule classification, where you can treat molecular graphs as multi-relational graphs and those graphs tend to have large graph diameters. Many datasets are readily available [here](https://ogb.stanford.edu/docs/graphprop/).\\n3. Details of the real knowledge graph datasets (AIFB and AM) are not provided in the main texts.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
h9XgC7JzyHZ | Efficient estimates of optimal transport via low-dimensional embeddings | [
"Patric Fulop",
"Vincent Danos"
] | Optimal transport distances (OT) have been widely used in recent work in Machine Learning as ways to compare probability distributions. These are costly to compute when the data lives in high dimension.
Recent work aims specifically at reducing this cost by computing OT using low-rank projections of the data (seen as discrete measures)~\citep{paty2019subspace}. We extend this approach and show that one can approximate OT distances by using more general families of maps provided they are 1-Lipschitz. The best estimate is obtained by maximising OT over the given family. As OT calculations are done after mapping data to a lower dimensional space, our method scales well with the original data dimension.
We demonstrate the idea with neural networks. | [
"optimal transport",
"sinkhorn divergences",
"robustness",
"neural networks",
"lipschitz",
"spectral norm"
] | Reject | https://openreview.net/pdf?id=h9XgC7JzyHZ | https://openreview.net/forum?id=h9XgC7JzyHZ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"SxwkOxh_k-",
"AcdlLXPgvMa",
"RaPagfyGNjz",
"0hNXn0MARw",
"ks5E4y8nGZ",
"BnCQQnJaEyE",
"hnlK5BJR8Lz",
"Oi5lrxPp8V",
"tQgm2nhJUv",
"UejE_cpyaca"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040395133,
1606245276839,
1606245194236,
1606244916150,
1606244881333,
1606244712235,
1604042986940,
1604008853502,
1603748328660,
1603137968348
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3641/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3641/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3641/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3641/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3641/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3641/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3641/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3641/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3641/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"We thank the authors for their submission. The paper feels more like an early draft, with several fundamental factual mistakes (mistake on computational and statistical complexities) as highlighted by the reviewers. There's plenty of material in the reviews to help authors improve their submission, we encourage them to use these recommendations to improve motivation / experiments.\"}",
"{\"title\": \"General remarks\", \"comment\": \"We thank all the reviewers for their detailed feedback and appreciate the help. We would like to answer first with some general comments.\\nIndeed, the curse of dimensionality is attributed to the statistical estimation of OT distances, and to the sample size of the empirical distribution, not to the dimension of the support space, we have changed this error in our submission to reflect that. There is a bit of an enigma here. Although in theory the dimension of the data should have linear impact on the computational complexity of computing OT (using Sinkhorn\\u2019s algorithm), in practice, as illustrated in Fig 4, computing OT systematically in the lower dimensional space seems to make a significant difference. It must be that a similar phenomenon shows in the Generalised Sliced Wasserstein and Sliced Wasserstein papers, where authors find that projecting OT problems in lower dimensions (d=1 in their case) shows in practice to be far more efficient.\"}",
"{\"title\": \"Answers to Reviewer 4\", \"comment\": \"Thank you for your very detailed feedback and positive review, it has helped us improve our paper. We have implemented most of your suggestions.\\n\\nRelated to b)\", \"you_are_completely_correct\": \"problem (4) only provides lower bounds for W(d_X) provided all maps in S are 1-Lipschitz. This is why during optimisation we renormalize linear layers to make sure the NN is non-expansive. We made that clear in our new version.\\n\\nRelated to your points a) and c) \\nwe followed the same approach as Paty & Cuturi to justify robustness and compared to their experiments, and we have very similar results for a synthetic dataset. Real data would indeed be better. \\nin terms of efficiency, computing the approximation with a projected stochastic gradient method means we only need to compute OT in the low-dimensional space at any iteration, thus making our method scale linearly with the dimension of the support. This can be seen from Figure 4) where we show the relative times for computing the approximation via Paty&Cuturi SRW and our GPW stochastic method. Figure 3) shows the normalised values for the approximation in both cases. \\n\\nEq. 6 is the same equation as seen in Patty&Cuturi 2019 Theorem 1, eq. 4. In order for the pushforward operator to make sense, it needs to be applied to the power 1/2 of \\\\Omega and is equivalent to computing the Mahalanobis distance in practice. In their particular case, \\\\Omega is a p.s.d matrix with trace k (dimension of subspace). \\nThis is one of the fundamental differences between their approach and ours. Because of their construction, they cannot optimise in \\\\Omega^{1/2} but only in \\\\Omega.\\nGreat point about the reference to d_X in (6), we will change it to point to d_Y, the euclidean metric on \\\\calY, the subspace of \\\\calX. \\n\\nPage 5, paragraph starting with \\u201cFor the second problem...\\u201c: Is there any guarantee that the procedure of renormalization of the linear layers described in this paragraph somehow approximates the projection of the mapping onto the set of 1-Lipschitz maps ? Furthermore, I guess that it is necessary to stress that the activation function should be 1-Lipschitz as well.\\nThe normalization approximations are better when the spectral norm is more exact, i.e. the power iteration method is ran for more time. \\n\\nIndeed, activations need to be 1-Lipschitz as well, something we refer to later in the computational details section. We can include a remark in this paragraph as well.\", \"page_6\": \"\\u201cin order to project back into the space of constraints we need to normalize each layer\\u2019s weights with the spectral norm\\u201d I am not sure that \\u201cwe need to\\u201d is appropriate here. It might very well happen that some of the linear layers have a spectral norm larger than 1 but the resulting NN is 1-Lipschitz. I recommend replacing \\u201cwe need to\\u201d by \\u201cwe sggest to\\u201d.\\nCorrect, only the total resulting NN needs to be kept 1-Lipschitz. Less heavy-handed ways to ensure it could be rewarding as you suggest. \\n\\nSection 5.2: The number of weights depends on the dimension d. Therefore, the computation of the gradient wrt phi requires a running time that is an increasing function of d. Furthermore, the projection onto the space of 1-Lipschitz functions is done by computing the spectral norm of a matrix of dimension qxd, where q is the number of units in the hidden layer. 
If k<d and power method is used for computing this norm, it involves at least k^2*d computations. It is therefore not clear why the authors insist on the fact that the \\u201cthe time to compute SD_phi is constant in dimension\\u201d, while what really counts is the overall computational cost of the method. On a related note, in Fig 4, it would be more relevant to show the running time of the algorithm and not just the time of computing SD_phi for a given phi.\\nThe full algorithm involves a feedforward/backward pass and spectral norm computation as well as the OT problem. Since the iteration in Algorithm 1 to compute SD_phi involves the normalisation step as well, what Figure 4 illustrates is the relative running time for the full algorithm. As mentioned in the same paragraph, we run the power method for 5 iterations in that specific case and vary only the dimension d. However, you are entirely correct and the statement should be \\u201clinear in dimension\\u201d instead of constant (as we are looking at log scale). It is still a major improvement compared to the exponential case for the method we compare with.\\nShould we understand that the first paragraph of section 5.1 relates to section 5.2 as well? If yes, please move it before section 5.1. If not, please provide more details on the experimental setting of section 5.2.\\nYes, same setup, moved the first paragraph.\"}",
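As a companion to the renormalization discussion in this thread, here is a minimal sketch (assuming PyTorch; the function names are placeholders, not the authors' implementation) of the projection step: each linear layer's weight is divided by a power-iteration estimate of its spectral norm whenever that estimate exceeds 1, which keeps the composed network non-expansive provided the activations are themselves 1-Lipschitz.

    import torch

    def spectral_norm_estimate(W: torch.Tensor, n_iter: int = 5) -> torch.Tensor:
        """Estimate sigma_max(W) with a few steps of power iteration."""
        v = torch.randn(W.shape[1])
        for _ in range(n_iter):
            u = torch.nn.functional.normalize(W @ v, dim=0)
            v = torch.nn.functional.normalize(W.t() @ u, dim=0)
        return u @ (W @ v)  # Rayleigh quotient approximates the top singular value

    @torch.no_grad()
    def project_1_lipschitz(model: torch.nn.Module, n_iter: int = 5) -> None:
        """Rescale every linear layer so its spectral norm is at most 1."""
        for m in model.modules():
            if isinstance(m, torch.nn.Linear):
                s = spectral_norm_estimate(m.weight, n_iter)
                m.weight.div_(torch.clamp(s, min=1.0))

Called after each gradient step, project_1_lipschitz would play the role of the proj_{1-Lip} operator that Reviewer 3 notes is left undefined in the algorithm; running more power iterations tightens the spectral-norm estimate, as the authors remark above.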
"{\"title\": \"Answer to Reviewer 1\", \"comment\": \"Thank you for your review and for the comments made, they will help strengthen our paper in a future submission.\\nIn relation to your comment about previous methods and generalisation. \\nOur method applies to a more general family of maps, than the projections considered by Paty & Cuturi and, as a consequence, obtains estimates for OT distances that are equally good and cheaper to compute in higher dimensions. We would like indeed to apply our method on real data and compare it further to other approaches for OT approximation such as Sliced Wasserstein.\"}",
"{\"title\": \"Answers to Reviewer 3\", \"comment\": \"Thank you for your review and for the comments made, they will help at a future submission.\\nPlease see see the overall response above for your other points.\\n\\u201cOn the optimization side, the problem once parametrized is non-convex and was probably already non convex on the space of Lipschitz mappings. Comments on these points are of interest. In other words, how hard is the optimization problem introduced by the authors?\\u201d \\nThe map $\\\\Phi: Lip(X,Y)\\\\to \\\\mathbb R_+$ defined as $\\\\Phi(f) = W(f(d_Y))(\\\\mu,\\\\nu)$ for fixed $\\\\mu$, $\\\\nu$ is convex and so $Lip_1(X,Y)$, so our starting problem is convex albeit with a infinite-dimensional space $Lip_{1}(X,Y)$. It would be very interesting to see if general methods from convex optimisation on Banach spaces lead to other approaches. As you suggest the S-families of neural networks which we consider are not convex.\"}",
"{\"title\": \"Answers to Reviewer 2\", \"comment\": \"Thank you for your review and for the comments made, they will help strengthen our paper in a future submission. We have some good preliminary experiments on how the method behaves on mixtures of gaussians.\"}",
"{\"title\": \"Needs more results\", \"review\": \"The paper lists a general approach to compare probability distributions. In particular it generalizes the approach by \\\\cite{Paty and Cuturi, 2019} to include arbitrary projections. In my mind, the idea of the paper is nice although I am not sure of its novelty. However, currently this seems a preliminary attempt to me (although a good one) rather than a complete paper. An extensive theoretical treatment needs to be carried out to truly establish the utility of this metric under high-dimensional circumstances. There needs to be more experimental setups that need to also be checked. In particular how does this method behave for heavy tailed distributions. When does this lose its speed? How does it perform for estimation of distances for mixture distributions? All of these questions would help strengthen the claim of superiority of the metric mentioned. I would encourage the authors to resubmit a more complete draft at a future submission.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Maximizing Lipschitz embeddings for Wasserstein distance estimation.\", \"review\": \"This paper uses the fact that the Wasserstein distance is decreasing under 1-Lipschitz mappings from the ambient space to a feature space in order to propose more robust (to dimensionality ?) estimation of the Wasserstein distance between two probability distributions. A neural network with some sort of weight renormalization is used to produce 1-Lipschitz embeddings. The authors then maximise over the parametrized maps.\\n\\nThe paper is not well-written and is sometimes simply wrong in my understanding. For instance, the second sentence of the abstract mention that \\u00ab\\u00a0they scale\\u00a0cubically \\u00bb, \\u00ab\\u00a0they\\u00a0\\u00bb is not defined and the curse of dimensionality in optimal transport is not related to cubic scaling. My opinion is that the authors misunderstood the curse of dimensionality in optimal transport which is related to the statistical estimation of the Wasserstein distance (that is the convergence when increasing the number of samples is quite slow).\\n\\nThe model and its instantiation are quite obvious; a generalization of linearly projected optimal transport (Paty, Cuturi).\\nOn the optimization side, the problem once parametrized is non-convex and was probably already non convex on the space of Lipschitz mappings. Comments on these points are of interest. In other words, how hard is the optimization problem introduced by the authors?\\n\\nExperiments lack a clear motivation and the synthetic experiments are not particularly illuminating. In fact, it is difficult to clearly state the problem addressed in this paper.\\n\\nIn the algorithm, proj^\\\\lampda_{1 - Lip} is not defined.\", \"minor_remarks\": \"\\u2014 f_\\\\phi instead of f_phi on page 6.\\n\\u2014 page 8: \\u00ab\\u00a0projects\\u00a0\\u00bb projections\\n\\u2014 page 5: \\u00ab\\u00a0Both problems already have solutions which are going to re-use\\u00a0\\u00bb we are going?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Mostly trivial statements, lacks comparisons\", \"review\": \"This paper addresses estimation of a certain type of OT distance, substance robust wasserstein distances, by Patty and Cuturi, 2019\\n\\nI don't think this paper is suited for publication since it lacks enough substance. \\n\\n1)There is gross error in the abstract: the curse of dimensionality doesn't refer any cubic scaling, but on an exponential dependence in dimension.\\n2)the first 4 pages are spent on elementary definitions. This appears as an unnecessary padding. I suggest authors put that kind of definitions in the appendix and/or cite relevant literature, e.g. the paper by Patty and Cuturi.\\n3)The overall idea, although sensible, appears unjustified. Why would the community be interested in this problem? In the current papers, authors claim they are generalizing the results of Patty et al. Nonetheless it is unclear whether there is reasons for wanting to create such generalized framework. It would be helpful if the authors had a concrete application to showcase their results.\\n4)Experimental results are weak, and comparisons with other methods are lacking, so it is hard to judge what are the actual gains.\", \"rating\": \"2: Strong rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The paper defines a pseudo-metric between two distributions based on a set of mappings from a low-dimensional space to a high-dimensional one. An algorithm for approximating this pseudometric when the mappings are neural networks is provided without any guarantee. Some experiments are reported. I find the contributions rather weak for being accepted in ICLR.\", \"review\": \"The paper introduces the notion of the generalized projected Wasserstein metric (GPW) as a pseudo-metric, associated with a set of mappings, between two probability measures.\\nWhen applied to the parametric set of mappings defined by a neural network with an output layer much narrower than the input layer, GPW can be used for performing nonlinear dimension reduction. The authors propose to replace the Wasserstein distance used in the definition of GPW by the Sinkhorn divergence and to solve the resulting optimization problem by combining the Sinkhorn algorithm and the projected gradient descent. \\n\\nThe paper is written as a sequence of historical remarks, literature review, definitions of some notions, and statements of their properties, without clearly defining what is the problem under consideration and what is the most important novelty leading to the main contribution. More precisely, the authors write (the last paragraph of the introduction) \\\"[...] we introduce a general framework for approximating high-dimensional OT using low-dimensional projections by finding the subspace with the worst\\nOT cost [...]. By taking a general family of parameterizable f\\u03c6s that are 1-Lipschitz, we show that our method generates\\na pseudo-metric and is computationally efficient and robust.\\\" \\n\\n a) As far as I understand, what the authors call \\\"general framework introduced in this work\\\" is the definition (4). \\nI would not qualify this definition as a new general framework.\\n \\n b) Contrarily to what is claimed, it is not clear that problem (4) approximates the high-dimensional OT. In particular, the\\n metric d_X, playing a central role in the high dimensional OT between mu and nu, is not used at all in the definition of d_S.\\n This metric appears implicitly when the function class is specified and chosen to be 1-Lipschitz wrt d_X. But still, it is not clear to me why the GPW can be thought of as an approximation of high-dimensional OT. \\n\\n c) There is no sufficient justification in this paper for claiming that the pseudo-metric defined by d_S is computationally efficient and robust. Furthermore, I would very much appreciate if the authors could elaborate on how the computational efficiency and the robustness should be understood within this work. \\n\\n*** More specific remarks\\n\\n- page 4, line 2: I guess \\\\mathscr P X should be replaced by \\\\mathscr P(\\\\mathscr X)\\n\\n- line -2 of section 3.1: should d/kS_k^2 be understood as (d/k)S_k^2? Please add parentheses to avoid any possible misunderstanding. \\n\\n- Eq 6 is not clear at all. The sentence that follows this equation does not really clarify it. There is not d_X in (6), why the sentence after (6) contains d_X ? Omega is supposed to be a mapping (in order that the push-forward of mu by Omega be well defined) but it is defined as a set of matrices? 
What is the power 1/2 of Omega?\\n\\n- Page 5, paragraph starting with \\\"For the second problem...\\\": Is there any guarantee that the procedure of renormalization of the linear layers described in this paragraph somehow approximates the projection of the mapping onto the set of 1-Lipschitz maps ? Furthermore, I guess that it is necessary to stress that the activation function should be 1-Lipschitz as well. \\n\\n- Page 6: \\\"It has been previously shown that [...] the Lipschitz constant of a fully connected layer is given by the spectral norm of the weights\\\". I suggest removing the words \\\"It has been previously shown that [...] \\\" since if I am not mistaken, this is just the definition of the spectral norm. \\n\\n- Page 6: \\\"in order to project back into the space of constraints we need to\\nnormalize each layer\\u2019s weights with the spectral norm\\\" I am not sure that \\\"we need to\\\" is appropriate here. It might very well happen that some of the linear layers have a spectral norm larger than 1 but the resulting NN is 1-Lipschitz.\\nI recommend replacing \\\"we need to\\\" by \\\"we sggest to\\\".\\n\\n- Should we understand that the first paragraph of section 5.1 relates to section 5.2 as well? If yes, please move it before section 5.1. If not, please provide more details on the experimental setting of section 5.2.\\n\\n- Section 5.2: The number of weights depends on the dimension d. Therefore, the computation of the gradient wrt phi requires a running time that is an increasing function of d. Furthermore, the projection onto the space of 1-Lipschitz functions is done by computing the spectral norm of a matrix of dimension qxd, where q is the number of units in the hidden layer. If k<d and power method is used for computing this norm, it involves at least k^2*d computations. It is therefore not clear why the authors insist on the fact that the \\\"the time to compute SD_phi is constant in dimension\\\", while what really counts is the overall computational cost of the method. On a related note, in Fig 4, it would be more relevant to show the running time of the algorithm and not just the time of computing SD_phi for a given phi.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
J3OUycKwz- | Mapping the Timescale Organization of Neural Language Models | [
"Hsiang-Yun Sherry Chien",
"Jinhan Zhang",
"Christopher Honey"
] | In the human brain, sequences of language input are processed within a distributed and hierarchical architecture, in which higher stages of processing encode contextual information over longer timescales. In contrast, in recurrent neural networks which perform natural language processing, we know little about how the multiple timescales of contextual information are functionally organized. Therefore, we applied tools developed in neuroscience to map the “processing timescales” of individual units within a word-level LSTM language model. This timescale-mapping method assigned long timescales to units previously found to track long-range syntactic dependencies. Additionally, the mapping revealed a small subset of the network (less than 15% of units) with long timescales whose function had not previously been explored. We next probed the functional organization of the network by examining the relationship between the processing timescale of units and their network connectivity. We identified two classes of long-timescale units: “controller” units composed a densely interconnected subnetwork and strongly projected to the rest of the network, while “integrator” units showed the longest timescales in the network, and expressed projection profiles closer to the mean projection profile. Ablating integrator and controller units affected model performance at different positions within a sentence, suggesting distinctive functions of these two sets of units. Finally, we tested the generalization of these results to a character-level LSTM model and models with different architectures. In summary, we demonstrated a model-free technique for mapping the timescale organization in recurrent neural networks, and we applied this method to reveal the timescale and functional organization of neural language models. | [
"natural language processing",
"LSTM",
"timescale",
"hierarchy",
"temporal context"
] | Accept (Poster) | https://openreview.net/pdf?id=J3OUycKwz- | https://openreview.net/forum?id=J3OUycKwz- | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"ZoT0u3PGUPu",
"O45GeHh3jv",
"aGTY_49ka9N",
"FtMC67NhA5R",
"WgKJSaF0HgV",
"KgxQg-ah92",
"B5etN1y5UiZ",
"GhMWbdgv23K",
"rrewDB1wUdj",
"_wiplxIqvRY",
"soh3AajWg9",
"1j5G50Wd6Ne",
"r5ZOKcOjz1",
"oXNV3B-nQQi",
"DJIafM4VfVk",
"gjv1D7DOf_",
"g6fkLwE-cm",
"YsssyVYMlpr",
"PQyeL7IFvVL",
"QxnC9gJQbz2"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040387094,
1606261278152,
1606130457800,
1605884693823,
1605884433173,
1605884206926,
1605884061832,
1605883991367,
1605883591476,
1605883552887,
1605883382141,
1605882963650,
1605882687538,
1605882609917,
1605882529940,
1605882036926,
1603901784550,
1603852452078,
1603796831770,
1603120974740
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3640/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3640/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3640/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3640/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper applies methods inspired by neuroscience to analyze the inner workings of LSTM language models. In particular, a simple and clever approach is proposed, in which a sentence is presented in its observed context vs. a random one. The time for a unit activation to become similar in the two contexts is used as a probe of the timescale of contextual effects. The main results are that timescales increase with layer and that there are two classes of long-timescale units with different graph-theoretical properties. The functionality of syntax-sensitive units previously identified in the literature is confirmed. Finally, the analysis is replicated for a character-level model.\\n\\nThe paper received detailed and insightful reviews, and there was a lively (but always respectful) discussion between authors and reviewers.\\n\\nOverall, the reviewers liked the topic of the paper and the overall methodology, however they had several issues with it. One of the issue pertained to the \\\"holistic\\\" approach to time in the paper, which is measured in number of tokens, rather than in terms of syntactic distance. More in general, there was a feeling that the paper was somewhat short on actual insights on the exact functional role of units in a linguistic context. The reviewer who assigned the most severe score was mostly concerned about one specific instance of this, namely the fact that the authors focus on syntax-tracking and number agreement units whose scope should not really extend across sentences. Moreover, the reviewer was surprised that the syntax-tracking units maintain information across longer distances than the number-agreement units, that should, by definition, keep track of long-distance relations.\\n\\nI am divided. I welcome work that focuses on novel qualitative and quantitative analyses of an existing model. I wished there were clearer take-home messages on how LSTMs process language, but I recognize that our knowledge of deep-learning models is very preliminary, and I am thus not surprised that the conclusions are not entirely clear. The reviewers raised important concerns, but I would not confidently claim that we know enough about the relevant units to be genuinely surprised by some of the results. For example, can we really say that number-agreement units are only limited to clause-internal agreement tracking? Couldn't it be, say, that we will discover in the future they also play a role in tracking discourse-determined pronominal number (going out on a random limb, here, of course)?\\n\\nOverall, I would like to see this at least as a poster at the conference, but I am assigning low confidence to my recommendation as I respect the reviewers' point of view.\"}",
"{\"title\": \"Thank you for your feedback!\", \"comment\": \"We have some updated results for the ablation analyses which we put in the revised manuscript (Figure 4C, Appendix A.5). In brief, we found that while controller units affected the overall probabilities assigned to the target words, integrator units played a more important role in assigning probabilities to the words in the later part of the sentences. These results further distinguish the functions of these two sets of long-timescale units. Still, we agree with the reviewer that future studies with a more thorough investigation are required to characterize the functional role of individual units, especially the controller units. We have edited the Discussion section to elaborate on this point.\\n\\nWe agree that testing the robustness of timescale organization on models with different hyperparameters is important. Due to time limit we could not thoroughly examine the robustness of timescale organization results in LSTM with different hyperparameters. However, we have looked at whether timescale organization preserves in an LSTM with fewer hidden units (i.e., 100 hidden units instead of 650, still with 2 layers). We trained the model until perplexity was reduced to ~130 (again, due to time limits), and ran the same timescale and network analyses. (Please refer to the results figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/LSTM-100units.png ) We found that an LSTM with 100 units showed a similar timescale organization pattern as LSTM with 650 units, insofar as there was a smaller set of long-timescale units compared to short-timescale units. However, the longest timescale of LSTM with 100 units was shorter than the longest timescale for an LSTM with 650 units. Although we only trained this 100 unit-LSTM for a limited time, the validation error had already plateaued for several epochs. This may suggest that a smaller number of hidden units could limit the model capacity of learning long-timescale information. Again, we agree with the reviewer that this is an interesting and important question to be more thoroughly explored in the future. \\n\\nAlso, for your previous question regarding Unit 823 in CLSTM, we have checked the activation difference curve of Unit 823 in a different context condition, where we segment the context and shared input in the middle of a sentence (at 100th character) instead of at the conjunction. We found that the sharp increase no longer exists (Please refer to the figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/unit823Check.png). Therefore, it is possible that the pattern is indeed due to this unit\\u2019s sensitivity to something like the \\u201cstart of phrase\\u201d or \\u201cstart of clause\\u201d or \\u201csyntactic head\\u201d information which could occur following the \\u201c, and\\u201d conjunction, as we previously speculated.\"}",
"{\"title\": \"The authors put efforts in revising and extending their analyses, but some points are still unclear\", \"comment\": \"I appreciate the efforts made by the authors to address most of the issues raised during the review process.\\n\\nI think that the additional analyses (e.g., the ablation study and the simulations with \\u201ccontrolled\\u201d random context) shed some light on the computational role of some unit types, but further (maybe future) investigations are required to more fully characterise the functional role of the integrator and controller units.\\n\\nThe comparison with the GRU model is useful, though somewhat limited given that that model was trained sub-optimally. Moreover, I would have liked to see some explorations about few critical hyperparameters of the LSTM (most importantly, number of hidden units) in order to better evaluate the robustness of the results.\"}",
"{\"title\": \"Official response to Reviewer 3 (cont.)\", \"comment\": [\"*Content*\", \"*I would say that the conclusion that \\\"Overall, prior works suggests that a small subset of units track long-range dependencies\\\" is rather overstated: Lakretz et al found that the units representing long distance number information were sparse, but this does not imply that long range information in general is represented sparsely. Their method also focusses quite exclusively on finding sparsely distributed properties, as more distributed properties cannot be found with ablation. Furthermore, this is just one study, focusing on one syntactic aspect. I would suggest to rephrase this a bit.*\", \"We agree with the reviewer. We will rephrase this sentence in our revised manuscript and will emphasize that while Lakretz et al. showed that number and syntax information were sparse, it remained unknown whether long-range context representations were sparse in general.\", \"*Lakretz et al actually identified several syntax units, but only one of them was interpretable.*\", \"Thank you for mentioning this; we will add this information into the manuscript.\", \"*I find it a bit confusing that in 3.2, second paragraph, you first talk about comparing cell state activation, then say that you compare hidden state activations and then talk again about the cell state activation*\", \"Thank you for pointing this out. We will revise the manuscript to make it consistent.\", \"*Figure 1 C & D: I don't think these figures add much to the paper, for the following reasons i) They show only individual units and no average, making it difficult to interpret the values ii) while, as pointed out in 5.1, the rate of decay is the most important, the cut-off point is not indicated in the figure, which puts a stress on irrelevant aspects: the actual difference between the two lines.*\", \"Thank you for the suggestion! We will rearrange the figures in our revised manuscript.\", \"*I would appreciate to have Figure A.1 in the main text, it is important for the story.*\", \"Thank you for pointing this out. We will rearrange the figures in the main text and appendix accordingly.\"]}",
"{\"title\": \"Official response to Reviewer 3 (cont.)\", \"comment\": \"- *What is the difference between character and word-level models in terms of expectations (we'd expect there to be an additional level of time-hierarchy, perhaps?)*\\n\\nIn terms of expectations between word-level and character-level models, under the hypothesis that character-level models can achieve similar performance as word-level models (as reported in Hahn et al. 2019), one should expect to see character-level models have units with longer timescales (measured at the scale of \\u201ccharacter\\u201d tokens, not \\u201cword\\u201d tokens) than word-level models. This would be necessary to allow character level models to integrate not only word-level but also sentence-level information. Indeed, we discovered units with timescales up to 50 tokens in the character-level model, which are much longer than the longest timescale of any unit in word-level model (~20 tokens). Still, this is just a rough comparison since the word-level and character-level model we evaluated in the study were trained with different architecture, and the training corpora (Wikipedia dataset) were preprocessed differently as well. \\n\\nIn Figure 2, we do not observe any obvious two-scale structure in the character-level LSTM, with a subset of nodes integrating characters at the word scale, and a separate set of characters integrating word-level information into larger structures. However, this is an interesting proposal which could guide future work \\u2013 for example, the prior context could be manipulated at a much finer grain in the LSTM, which could reveal a gradation of units that focus on within-word contextual integration. Since we focused on Layer 2 of the LSTM models, it may be more fruitful to conduct such an intra-word analysis in Layer 1 of the model, which has a shorter timescale overall.\\n\\n- *How do assessing activation differences and correlations differ in terms of conclusions?*\\n\\nAll of our analyses of individual unit timescales (i.e. via activation differences) were performed within Layer 2 of the LSTM model. All of our analyses of correlation were applied at the level of an entire LSTM layer (e.g. layer 1 or layer 2). Thus, the correlation-based analyses and activation-based analyses were not applied to address the same questions in this paper.\\n\\nIn general, one would expect that a higher cross-context correlation would correspond to lower activation difference across contexts, indicating less representational change. Thus, within layer 2, if we manipulated the network so as to reduce the timescales inferred from the activation patterns of individual units, we would also expect this effect to show up in the aggregate and would reduce the timescale inferred from measuring the state of the entire network-layer using the correlation approach. \\n\\nWe hope that the text above clarifies the relationship. We have adjusted the Methods text to emphasize that the unit-level activation difference analyses were only applied in Layer 2.\\n\\n\\n- *Lastly, there are a few unsupported claims, the most important of which that their method recovers the previously discovered units of Lakretz et al, while (as far as I understand), they actually only use their method to analyse those neurons, but did not find them independently.* \\n\\n\\nThank you for raising this point. 
We have changed the wording in the abstract, so that, instead of saying that we \\u201crecovered\\u201d the units, we say that our method was \\u201cvalidated against\\u201d these units, to make clear that these units were discovered by a different approach, and served as a reference.\\n\\nThe main focus of the paper is to propose a model-agnostic approach to measure timescales of all units in the network. From this perspective, it was valuable to be able to validate our method by showing that the syntax unit and number units (identified by Lakretz et al) exhibited relatively long timescales. Our goal in this paper is to emphasize the overall timescale organization of the network, and the existence of other long-timescale units.\\n\\nIf possible, we would appreciate if the reviewer could point out any other specific sentences that contain unsupported claims, so that we can address them carefully.\\n\\n- *Suggestions/comments for authors Typographic:*\\n\\nThank you for the suggestions regarding the format and typos in the manuscript! We will revise the manuscript carefully based on the suggestions and comments.\"}",
"{\"title\": \"Official response to Reviewer 3 (cont.)\", \"comment\": \"- *While, as I said before, I think it is great that the authors try to use methods from neuroscience into the field, I do think that in this case the main method they propose is only very marginally different from earlier work (in particular Khandelwal et al. Perhaps it would make more sense to put a bit more stress on the rest of the methods as well (btw, also Lakretz et al do connectivity analysis).*\\n\\n\\nAlthough we were certainly inspired by the work of Khandelwal et al. and by Lakretz et al., our work has goals and findings that are clearly distinct from both. \\n\\nKhandelwal et al. used a model-agnostic method to measure how much context is used by the LSTM as a whole; they measured how the overall model performance (measured by loss and perplexity) was affected when the given context was limited or scrambled. Khandelwal did not seek to understand the variation of timescales within different components of the model architecture, or the information flow between the different components.\\n\\nLakretz et al. were interested in the contextual representations of individual units within the LSTM. Their approach was not model-agnostic; instead, their context measurements related to specific functions (e.g. tracking of number). Therefore, they did not employ a context-scrambling procedure, and did not set out to map the overall profile of contextual processing across the model architecture. \\n\\nIn the present paper, we set out to map how much context is encoded by individual units in LSTMs, and to understand the flow of information between units of shorter and longer timescales. Therefore, we proposed a method for mapping the timescales of context dependence in individual units. Further, we explored the relationship between the mapped timescale of each unit and its role in the LSTM network structure. Neither Khandelwal et al. nor Lakretz et al. examined the large-scale functional architecture of the network. Thus, for example, these prior studies did not characterize how many nodes in the LSTM engaged in relatively context-free processing, nor how many different nodes tracked long-range context dependencies. We were certainly inspired by Khandelwal et al. and by Lakretz et al. (as well as by recent developments in neuroscience); at the same time, we hope it is clear that our goals and findings are distinct.\\n\\nWe have revised the Introduction to make these distinctions clearer.\\n\\n- *The results are a bit underexplained, and understanding them requires many back and forths to the appendix. I would have appreciated a bit more motivated interpretation of several aspects. For instance: why is there such a large difference in activation differences in different units in the \\\"pre-shared segment\\\" part, and is this related to the half-time (it seems so from the plots)?*\\n\\nThank you for the questions. As another reviewer also suggests, we will reorganize the Appendix to make it easier to understand. We will also add some of these points below into the manuscript. Below are our responses to some of the specific questions. \\n\\nThe variation across units in \\u201cactivation differences\\u201d when processing the different context segments could be driven by two sources:\\n(1) individual units have different functions in language processing; and \\n(2) the linguistic content of the \\u201cpre-shared segment\\u201d caused units to have different activations accordingly. 
\\n\\nThe variation across units in the height of the asymptote (\\u201cpre-shared segment\\u201d) is most likely due to the first factor. If a unit tracks a linguistic feature that varies very little across most sentences, then its activation patterns will be quite similar across many different sentences.\\n\\nWe did not observe a relationship between the inferred processing timescales (the half-time) and the activation difference during the context period (the asymptote during the context segment). We computed the correlation between these two variables across units. (Please refer to the figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/time_asym_corr.png).\", \"we_conducted_this_analysis_in_two_datasets\": \"the dataset used in our early draft manuscript (novel Anna Karenina), and the test dataset used in Gulordava et al. derived from the Wikipedia corpus used to train this LSTM model. As the figure shows, for both datasets, there is very little relationship between the activation difference in the context period and the timescale inferred for each unit. That said, in future work, we will be investigating the reasons for the differences in the heights of the asymptotes of the logistic fits.\\n\\nIt also bears mentioning that there does appear to be a relationship between the processing timescale and the asymptote on the right-hand side of the curve (i.e. the magnitude of activation difference after a unit has already processed 10 or more tokens of shared context), but this is expected, since nodes with longer processing timescales, should show a larger difference across contexts.\"}",
"{\"title\": \"Official response to Reviewer 3 (cont.)\", \"comment\": \"- *Relatedly, the authors say that their result that the syntax unit is a long distance unit, while the number units are not. This is not consistent with what they say in the related work of the section, but also not with the results reported by Lakretz et al, who hypothesise that the syntax units represent the depth of the syntactic dependency. This is something that changes with every new incoming word, whereas the number units are the ones that have to keep their activation constant across time.*\\n\\nThank you for raising this important issue. Indeed, we can distinguish two ways of measuring \\u201cdistance\\u201d between the current input and its prior context. On the one hand, distance can be measured with reference to an abstract language structure (a \\u201cfunctional\\u201d type of distance). On the other hand, distance can be measured with the a simple count of the number of \\u201ctokens\\u201d (an \\u201cimplementation\\u201d type distance). We feel that both notions of distance are valuable for understanding how LSTMs represent and process linguistic information. \\n\\nWe agree with the reviewer that if the syntax unit is tracking the depth of syntactic hierarchy, as Lakretz et al. hypothesized, then its timescale of context dependence should be flexible and? based on the syntactic structure of a sentence. \\n\\nConversely, one could also imagine units whose maintenance of information varies explicitly as a function of token distance, because, after all, recurrent neural networks face the challenge that prior context must be passed forward timestep-to-timestep. The need to preserve context from token to token is why the vanishing and exploding gradients problem become so salient for long-range dependencies.\\n\\nIn this paper, we demonstrated a model-free method for measuring the \\u201caverage timescales\\u201d of individual units to aid in understanding the functional organization of the system. While other methods can certainly be used to measure timescales under more carefully controlled contexts, or to test specific functional models, we feel that the method we used has the following advantages: (i) it can be applied to a wide range of architectures, corpora and languages in a manner that is agnostic to the functional structure of the language; (ii) it maintains a connection to the implementation-level constraints faced by the LSTM by measuring the token-level distance.\\n\\nThe syntax unit and number units are both medium-to-long timescale units relative to other units (Figure 2A) when the timescale is measured as the average number of tokens of prior context that affect the current response. Although it is beyond the scope of this project, a fascinating question for future work would be to examine the relationship between the timescale map measured in \\u201ctokens\\u201d as presented here, and the timescales that would be derived based on more functional metrics such as syntactic distance.\"}",
"{\"title\": \"Official response to Reviewer 3\", \"comment\": \"We thank the reviewer for acknowledging the potential contribution of this paper, including the importance of transferring methods from brain science to understand neural network AI models, and the importance of analyzing timescales in neural language models.\", \"here_are_the_point_by_point_responses_to_the_concerns_raised_by_the_reviewer\": \"- *My main concern is that there seems to be a mismatch between the \\\"language time scales\\\" on which the authors operate: their experiment is designed to investigate the impact of extra-sentential context, but the Lakretz et al results they keep coming back to concern syntactic phenomena that are only relevant within a sentence, which is a different scale. In other words, the units found by the authors of this paper are long-distance when it comes to integrating context, but the syntax and number units found by Lakretz et al are not really related to that: they model relationships within sentences. Theoretically speaking, they should be reset at the beginning of every new sentence and they should thus be completely independent from the content. That the authors find this to be untrue is interesting, but inconsistent with what Lakretz et al describe these unit do. Since this is not addressed at all in the paper, it makes the results in general a bit difficult to interpret.*\\n\\nWe apologize for the confusion here. Crucially, our work only examines phenomena within an individual sentence, just as in the Lakretz study. The reviewer is correct that context representations are \\u2018reset\\u2019 at sentence boundaries in the Gulordava et al. model (see below, where we confirm this in our own data). For this reason in an early draft of our manuscript, we had only analyzed single sentences which combined two distinct sub-sentences in the following way, e.g.:\\n\\n\\u201cThe boy kicked the ball, and the girl caught it.\\u201d\\n\\nSince the segment before the conjunction \\u201c, and\\u201d can be read as a self-contained sentence, our original paper contained text such as : \\u201cthe preceding sentence differed across the two conditions\\u201d (in Section 3.2). However, to be clear, these preceding segments were always part of the same sentence. Thus, in all of the data reported in our paper, we only examined context effects within a single sentence. We apologize again for the confusion. For improved clarity, we have now revised the manuscript so that the word \\u201csegment\\u201d is used consistently throughout to refer to the prior context segment and to the shared input segment.\\n\\nConsistent with the reviewer\\u2019s statement, we did find that the context representation of units was \\u201creset\\u201d at sentence boundaries. To demonstrate this, we examined the timescales of each unit when the context segment and shared input segment were separated by a \\u201cfull stop\\u201d symbol, which signals the end of a sentence. (Please refer to the result figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/fullstop_reset.png). When the first and second segments were separated \\u201cfull stop\\u201d symbol, we found that the timescales inferred for most of the units became extremely short, compared to when the first and second segments were separated with a \\u201ccomma\\u201d symbol as in our original paper. 
This \\u201creset\\u201d phenomenon at the beginning of every new sentence is consistent with what the reviewer predicts and with the results of Lakretz et al.\"}",
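For readers who want the timescale probe described above in algorithmic form, here is a schematic sketch (variable and function names are hypothetical, and the exact fitting choices in the paper may differ): hidden states are recorded over the shared segment under intact vs. substituted context segments, and each unit's timescale is read off as the half-decay point of a logistic fit to its mean absolute activation difference.

    import numpy as np
    from scipy.optimize import curve_fit

    def unit_timescales(act_intact, act_random):
        """act_*: arrays of shape (n_sentences, n_shared_tokens, n_units) holding
        hidden states over the shared segment under the two context conditions."""
        diff = np.abs(act_intact - act_random).mean(axis=0)   # (tokens, units)
        t = np.arange(diff.shape[0], dtype=float)

        def decaying_logistic(x, height, slope, half_time):
            return height / (1.0 + np.exp(slope * (x - half_time)))

        half_times = []
        for u in range(diff.shape[1]):
            try:
                params, _ = curve_fit(decaying_logistic, t, diff[:, u],
                                      p0=(diff[0, u] + 1e-6, 1.0, 1.0), maxfev=5000)
                half_times.append(params[2])                  # tokens until half decay
            except RuntimeError:                              # flat or noisy unit
                half_times.append(np.nan)
        return np.array(half_times)

Replacing the context segment with a "full stop"-terminated one, as in the reset check above, amounts to changing how act_random is generated while keeping the fitting step unchanged.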
"{\"title\": \"Official response to Reviewer 1 (cont.)\", \"comment\": \"- *Pg. 3: there is a typo regarding the size of the output layer (5,0000)*\\n\\nThank you for the correction. We will revise the manuscript accordingly.\\n\\n\\n- *In Fig. A1, error bars would help in better understanding the actual difference between the curves.*\\n\\nThank you for the suggestion. We will add error bars onto the hidden state correlation in the revised manuscript.\\n\\n\\n- *In order to improve reproducibility, it would be very helpful to share the source code used for these analyses.*\\n\\nWe agree with the reviewer that code sharing is important. We did not do this in the early version of manuscript due to time limit. We will add an anonymous link for sharing the code used for this study in the revised manuscript.\"}",
"{\"title\": \"Official response to Reviewer 1 (cont.)\", \"comment\": \"- *Other comments: Why did the author choose to test the model on a different corpus (Anna Karenina novel) rather than considering a test set from the same corpus from which the training set was derived? The Tolstoy book might have a quite different linguistic structure from that of the corpora used to train the LSTMs.*\\n\\nWe agree with the reviewer\\u2019s point that the two corpora have different linguistic structure. We have re-analyzed the timescales using the test set derived from the Wikipedia training corpora that the model was trained on. We have also increased the sample size of test sentences to 500. We found the timescales measurements were highly correlated across the two corpora (r = 0.82), indicating that our results are not specific to the particular corpus we originally tested. We will revise the manuscript to add our new analyses using this dataset.\\n\\n- *It might be informative to also include a third condition in-between \\u201cIntact\\u201d and \\u201cRandom\\u201d context, where the same context words are maintained with scrambled order. This would allow to better understand the role of individual words in shaping context representation and activating the LSTM units.*\\n\\nThank you for the excellent suggestion. We agree that this analysis would be useful to understand the how the context structure shapes the representation of each units. We conducted an analysis similar to that of Khandelwal et al. to map units\\u2019 timescales using the shuffled context vs. intact context, and then compared the results with our original analysis in which we replaced the prior context. (Please refer to the figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/timescale_corr_middle_shuffle.png). \\n\\nWe observed that most long timescale units (timescale > 10 words) showed longer timescales when the context was replaced compared to when it was shuffled. One unit (Unit 665) showed a larger effect than all others, with its timescale estimate decreasing from more than 20 words down to about 6 word for the shuffled context. This phenomenon indicates that many of the long-timescale units are preserving context in a way that depends on the presence of a coherent structure (e.g. grammaticality of the prior context) and that there is a very small subset of long-timescale units that almost entirely resets their context sensitivity for shuffled text. We did not observe any units which showed longer timescales for shuffled text than for replaced text.\\n\\nAll in all, this analysis indicates that for most units the individual words do indeed play a large role in shaping the context representation (because overall the timescale patterns are highly correlated across the shuffling method and the replacement method). At the same time, for the units with the longest processing timescales, the context representations were preserved much longer when that prior context is composed of coherently structured language.\\n\\nWe will revise the manuscript to include these new analyses.\\n\\n- *In Fig. 1D, it is interesting to note that the Unit 823 (green line) actually exhibits a sharp increase in difference after the shared segment starts. Do the authors have a possible explanation for this kind of phenomena? Was it observed systematically in other units?*\\n\\nThank you for raising the question. 
As shown in Figure 2B, some units in CLSTM seem to exhibit such phenomenon at the beginning of the shared segments (i.e., starting at \\u201c, and\\u201d in our analyses). We speculate that this might be due to the units\\u2019 sensitivity to \\u201cstart of phrase\\u201d / \\u201cstart of clause\\u201d / \\u201csyntactic head\\u201d which would be revealed based on our method to segment context and shared input segments in the current paper. We are checking this by using different segmentation method (Please see our reply to the first point raised by Reviewer 2) and seeing if this phenomenon in Unit 823 is preserved.\\n\\n- *In relation to the results shown in Fig. 3A, I did not understand how the thresholds and parameters for the k-core analysis were chosen.*\\n\\nThank you for raising this. We will revise the manuscript to better explain the settings of these analyses. The threshold chosen here does not really change the results shown in Fig. 3A that units with longer timescales tend to have more strong projections: Using |z| > 3 as threshold we obtained corr(timescale, projections) = 0.30, p<0.001; using |z| > 4 we obtained corr(timescale, projections) = 0.35, p<0.001; using |z| > 5 we obtained corr(timescale, projections) = 0.29, p<0.001. For the k-core analysis, we chose top n weight values form the weight matrices as edges to construct the network, where n is the number of identified strong projections.\"}",
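To illustrate the thresholding just described, the sketch below is an assumption of this note, not the authors' code: W_hi and W_hf are hypothetical names for the hidden-to-input-gate and hidden-to-forget-gate weight matrices, the |z| cutoff is used for both counting strong projections and building the k-core graph (whereas the response above describes taking the top n weights for the latter), and networkx supplies the k-core decomposition.

    import numpy as np
    import networkx as nx

    def strong_projections_and_core(W_hi, W_hf, z_thresh=4.0):
        """W_hi, W_hf: (n_units x n_units) hidden-to-gate weight matrices.
        Returns the number of strong outgoing projections per unit and the
        members of the maximal k-core of the induced graph."""
        n = W_hi.shape[0]
        W = np.concatenate([W_hi, W_hf], axis=0)      # stack both gate targets
        z = (W - W.mean()) / W.std()
        strong = np.abs(z) > z_thresh                 # boolean mask, shape (2n, n)
        counts = strong.sum(axis=0)                   # strong projections per source unit
        rows, cols = np.nonzero(strong)               # row: gate target, col: source unit
        G = nx.Graph()
        G.add_nodes_from(range(n))
        G.add_edges_from(zip(cols, rows % n))         # collapse the two gates onto units
        G.remove_edges_from(nx.selfloop_edges(G))     # k_core requires no self-loops
        return counts, sorted(nx.k_core(G).nodes)

Correlating counts with the per-unit timescales at several values of z_thresh reproduces the kind of robustness check reported above (|z| > 3, 4, 5).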
"{\"title\": \"Official response to Reviewer 1\", \"comment\": \"Thank you for your positive comments! Please see our responses to the points you raised below:\\n\\n- *I think that further analyses are required in order to better clarify some important aspects. In particular, I think that ablation studies should be performed in order to better identify the functional role of the \\u201ccontroller\\u201d and \\u201cintegrator\\u201d units, whose actual functional role remains a bit speculative (and mostly based on structural / connectivity information). It would also strengthen the paper to have some more controlled simulations, where the contextual information is defined according to specific linguistic constraints, in order to better characterize what the target units are actually encoding. Indeed, as also noted by the authors almost \\u201call the long timescale units are of unknown function\\u201d.*\\n\\nThank you for the positive comments and valuable suggestions. We agree that it is important to further investigate the functional roles of the units we identified using the current analyses. Each putative \\u201ccontroller unit\\u201d may serve a different functions due to its distinctive positions in the MDS space (Figure 3D). It will requiring targeted linguistic experiments to probe each functions. Nonetheless, we performed a preliminary group ablation analysis to look at how ablating the controller units influences model performance, relative to the intact original model, and relative to the ablation of a random set of units. \\n\\nWe evaluated model performance by examining the difference of probabilities assigned to the target words in the ablated model and in the intact model, and comparing the conditions when controller units (N=10)/integrator units (N=5) were ablated vs. same number of random units from layer 2were ablated. We used the test corpus used by Gulordava et al. and measured the average performance of each model across 100 text-batches, randomly sampled from the Wikipedia test dataset. Each text-batch was composed of 1000 tokens start from the beginning of a sentence. \\n\\nWe found that ablating controller units reduced the probabilities the model assigned to the target words, more so than ablating random units (controller vs. random across 100 text batches: Cohen\\u2019s d = 4.96, t=-35.12, p<0.001). Ablating integrator units did not show significant difference compared to ablating random units (Cohen\\u2019s d = 0.8, t=1.77, p=0.09). It may be that the integrator units mostly influence the model performance on predicting tokens in cases where long-range information is especially relevant (e.g. in the later portions of long clauses). Overall, these abalation results support the non-trivial functional role for the controller units, which are only 10 amongst 650 units in total. (Please refer to the figures here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/ablation_analysis.png)\\n\\nHowever, as we mentioned, this only provides a rough sense of the importance of these units compared to other units in the network. Also, as Reviewer 4 commented, ablation analysis has its limitation, and one should be careful interpreting the results (https://doi.org/10.1007/s42113-020-00081-z). Analyses targeted on specific linguistic properties will be required to further understand the exact functional role of each unit in the controller network, in the vein of Lakretz et al. 
(2019), and we hope that the timescale and network mapping methods we have introduced here can help to target future investigations of this kind.\\n\\n- *Finally, I think that it would be important to establish whether these findings are generally applicable to LSTM models, regardless of the specific architecture under investigation (e.g., What happens if we force the LSTM to rely on fewer units? Does the hierarchical organization of the context improve by adding more layers?).*\\n\\nReviewer 2 has also raised the concern of whether the results are applicable to other architectures. To address this concern, we trained a GRU language model, mapped the timescales of each layer, mapped the timescales of individual units, and explored the relationship between timescale and connectivity patterns. We found similarities and differences regarding the results between GRU and LSTM; the distribution of timescales across nodes was similar across the GRU model and the LSTM. The GRU did not show the same timescale-connectivity relationship as the LSTM. These findings remain preliminary because of the limited time available for training the GRU model. For further details, please refer to our response to the third point raised by Reviewer 2.\"}",
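As a rough sketch of the group-ablation comparison described above (the attribute names model.encoder / model.rnn / model.decoder stand in for a 2-layer word-level LSTM language model and are placeholders, not the authors' exact code), the model is stepped one token at a time, the chosen layer-2 units are clamped to zero after every step, and the log-probability assigned to each next token is recorded for comparison with the intact model.

    import torch

    @torch.no_grad()
    def target_logprobs(model, tokens, ablate_units=()):
        """tokens: 1-D LongTensor. Returns the log-probability the (optionally
        ablated) model assigns to each next token in the sequence."""
        h = c = None
        logps = []
        for t in range(len(tokens) - 1):
            x = model.encoder(tokens[t].view(1, 1))            # (seq=1, batch=1, emb)
            out, (h, c) = model.rnn(x, None if h is None else (h, c))
            for u in ablate_units:                             # clamp selected layer-2 units
                h[1, 0, u] = 0.0
                c[1, 0, u] = 0.0
            logp = torch.log_softmax(model.decoder(out.squeeze()), dim=-1)
            logps.append(logp[tokens[t + 1]].item())
        return logps

Averaging target_logprobs over many text batches with ablate_units set to the controller, integrator, or random unit indices gives the three conditions contrasted above.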
"{\"title\": \"Official response to Reviewer 2 (cont.)\", \"comment\": \"- *Minor points: In Figure 3, it would be helpful if the absolute timescale was labeled in all plots rather than the rank of the unit or the \\u201cnormalized timescale\\u201d. The absolute timescale seems much more meaningful to me (and the units can of course still be ranked, just the axis labels changed or augmented).*\\n\\nBecause our interpolation FWHM method can only map timescales as an integer number of words, it would be difficult to visualize the exact timescale for every unit in Figure 3A : many units have the same timescale and would thus overlap visually. We will add the scatter plot of timescale variation as shown in: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/timescale_corr_commaAnd_middle.png into Appendix for readers who would like to see the exact timescale of individual units.\\n\\n- *I thought the information in Appendix A.1 that describes the basic stimuli used should be added to the main text.*\\n\\nThank you for your suggestion. We will add this information into main text. \\n\\n- *What matrix was MDS performed on? The thresholded connection matrix?*\\n\\nThe matrix the MDS was performed on is the hidden-to-gate projection patterns which was obtained from the original weight matrices in LSTM (concatenating W_hi and W_hf ).\\n\\n- *Figure 3C didn\\u2019t seem to convey any useful information. Am I missing something?*\\n\\nThank you for the comment. We will move this figure to Appendix. \\n\\n- *Why are there two bias terms in equations 4 and 5?*\\n\\nThank you for pointing this out. We will revise the manuscript.\\n\\n- *Equations 4 and 5 have do not have the variable W_i and W_h mentioned above, only W_ii, W_if, W_hi, W_hf. I personally think it would be clearer to give the weights for the inputs and hidden units different names rather than different subscripts.*\\n\\nThank you for the suggestion. We agree that it would be clearer to use different names and we will revise the manuscript accordingly.\\n\\n- *In Fig 1B, having a panel that plots the difference between intact and random context, might make the point clearer. Presumably this would show that the differences become smaller over time which is not obvious from the plot.*\\n\\nThank you for your suggestion. We will revise Figure 1 to make this clearer.\"}",
"{\"title\": \"Official response to Reviewer 2 (cont.)\", \"comment\": \"- *It would be interesting to know how dependent these findings are on the model\\u2019s architecture. Would similar results be found for a Transformer or a simpler GRU-style RNN?*\\n\\nThank you for the suggestion. We agree it would be interesting to use this model-free approach to explore the timescale organization of language models with a different architecture. \\n\\nTo explore how architecture may influence the findings, we trained a word-level GRU language model using the same settings as used in Gulordava et al., including the same Wikipedia training corpus, the same loss function (i.e. cross-entropy loss), and the same hyperparameters. Specifically, the GRU model also has two layers with 650 hidden units in each layer as the LSTM model we analyzed in the current study. \\n\\nUnfortunately, due to limitations of time and computational resources, we had to stop training the model after 48 hours, at which point the GRU achieved a test perplexity of 349.39. This perplexity is much higher than the LSTM model reported in Gulordava et al. (perplexity = 52.1 in the English corpora) because they selected the best models after training for 40 epochs, whereas our testing GRU model was only trained for ~10.5 epochs. \\n\\nWe then analyzed the timescale of hidden units using the same method as was used for analyzing the LSTMs, and using the test data derived from the training Wikipedia corpus (Please refer to the result figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/GRU_timescale_wikitest.png). \\n\\nOverall, we found that the majority of the units in the GRU also showed shorter timescales, similar to the Gulordova LSTM model, but that the relationship between timescales of each GRU unit and its projection patterns appeared different to the relationship observed for the LSTM. In particular: (1) similar to the word level LSTM, the second layer of the GRU model was more sensitive to prior context than the first layer. (2) The overall distribution of timescales in the GRU was similar to LSTM, although the GRU showed a right-skewed distribution with a larger proportion of short-timescale units relative to the word-level LSTM (Panel B and C). Of course, the differences between GRU and LSTM models may arise because of the limited training time for the GRU, and so can only be taken as provisional.\\n\\nWe also performed the timescale vs. network connectivity analyses on this GRU model (Please refer to result figures here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/GRU_connectivity.png). Because the update of hidden states in GRU are controlled by the update gate, we chose to analyze hidden-to-update gate weight matrix. In contrast to the LSTM models, the GRU that we trained did not show the pattern that units with longer timescales exhibited more strong projections (Panel A). Moreover, when using k-core analysis to identify subunits of interconnected high-degree units, the core network contained many units with long to short timescales. Interestingly, when we visualize the position of the k-core units in the MDS space, they tended to locate at the edge of the space, similar to what we found in LSTM. 
This indicates that the k-core units have distinctive profiles, distant from one another and from other units in the network.\\n\\nAlthough the similarities/differences between LSTM and GRU here are intriguing, we should keep in mind that (1) the perplexity of this GRU is much higher than that of the LSTM, due to limitations in training time and tuning, and that (2) comparing the LSTM and GRU connection patterns is not straightforward, as the overall distribution of weights is different, so further work is required to determine comparable thresholds for \\u201cstrong\\u201d projections and \\u201chigh-degree units\\u201d in each case. As we noted in the manuscript and above, the connectivity results are exploratory; however, we believe that the GRU analysis shows how these methods can be extended to map and compare the functional organization of language models of different architectures.\"}",
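For concreteness, the following sketch defines a GRU language model matching the stated configuration (two layers of 650 hidden units, cross-entropy loss) and performs one training step. The vocabulary size, optimizer, and batch are placeholders, not the Gulordava et al. training setup.

```python
import torch
import torch.nn as nn

VOCAB, HIDDEN, LAYERS = 50000, 650, 2

class GRULM(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HIDDEN)
        self.gru = nn.GRU(HIDDEN, HIDDEN, LAYERS, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def forward(self, tokens, state=None):
        h, state = self.gru(self.emb(tokens), state)
        return self.out(h), state

model = GRULM()
opt = torch.optim.SGD(model.parameters(), lr=1.0)  # placeholder optimizer
loss_fn = nn.CrossEntropyLoss()

tokens = torch.randint(0, VOCAB, (16, 36))         # stand-in Wikipedia batch
logits, _ = model(tokens[:, :-1])                  # predict each next token
loss = loss_fn(logits.reshape(-1, VOCAB), tokens[:, 1:].reshape(-1))
loss.backward(); opt.step(); opt.zero_grad()
print("batch perplexity:", loss.exp().item())
```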
"{\"title\": \"Official response to Reviewer 2 (cont.)\", \"comment\": \"- *None of the steps in the graph analyses seemed particularly natural or well-motivated to me. Why were the graph edges thresholded at z>5 and why was k-core analysis performed? I find it hard to make sense of what this analysis tells us about how language information is processed. Is there some reason why medium timescale \\u201ccontroller\\u201d units and long-timescale \\u201cintegrator\\u201d units should help with language processing? If these results are purely exploratory and lack a clear interpretation, then perhaps the authors could help the reader by explaining the thought process behind the exploration. Perhaps starting with the MDS plot would be useful rather than the k-core analysis, because the MDS plot clearly shows some interesting structure.*\\n\\nThank you for your comments. We agree with the reviewer that our analyses combining network connectivity with unit timescales are exploratory. We will revise the manuscript to make clearer that these analyses are motivated by a (general) hypothesis about the functional organization of the LSTM system, inspired by the hierarchical structure of the human brain. \\n\\nIn the human brain, timescales of processing and anatomical connectivity are related:\\nMore sensory regions (near the periphery of the cortical network) tend to have shorter timescales and lower degree; while higher-order regions (near the interior core of the cortical network) tend to have longer-timescales and higher degree.\\n\\nMoreover, human brain networks have a core periphery structure, in which a relatively small number of \\u201chigher order\\u201d and high-degree regions (in prefrontal cortex, in default-mode regions and in so-called \\u201climbic cortex\\u201d) maintain a large number of connections with one another, and exert a powerful controlling role over large-scale cortical dynamics. This notion of a network core is longstanding \\u2013 see Figure 2 in Mesulam (1998) (https://doi.org/10.1093/brain/121.6.1013) or Figure 5 in Hagmann et al. (2008) (https://doi.org/10.1371/journal.pbio.0060159), for variations of this idea.\\n\\nTherefore, we were interested to test the (tentative) hypothesis that higher degree nodes in LSTMs might have longer timescales, and lower degree nodes might have shorter timescales. This is the why we pursue these particular sets of network analyses, examining the k-cores, coupling maps, and their relationship to the timescales. Since language processing involves lower to higher-level cortices in the brain, understanding the similarities/differences of functional organization between the brain and language models should help us understand how language information is processed in language models.\\n\\nWhat do the \\u201cintegrator\\u201d and \\u201ccontroller\\u201d units tell us about language processing? As this is our first effort to construct the functional maps, it is too early to make specific statements. We feel that the more important contribution is that the mapping tools applied here enable us to narrow down the range of candidate nodes that could be in involved in long-range dependency tracking, which presumably is associated with more high-level grammatical and discourse level-processing. 
The aim of this mapping procedure is to serve as a guide to future, more targeted investigations.\\n\\nIn terms of the details of how we chose the threshold, we would like to clarify that the threshold |z| > 5 was only used to investigate the relationship between the number of strong projections and the timescale of individual units (shown in Figure 3A), but not used to construct graph edges. Furthermore, the threshold does not really affect the current results that units with longer timescales tend to have more strong projections: we observed a significant correlation between timescale and number of strong projections using multiple thresholds: |z|>3 (r=0.3), |z|>4 (r=0.35) and |z|>5 (r=0.29). Also, we got significant correlations when varying the thresholds in the character-level LSTM model (|z|>5, r=0.24; |z|>4, r=0.30; |z|>3, r=0.36).\\n\\nOn the other hand, the graph edges for constructing the network shown in Figure 3C were identified using the top 258 weight values in the LSTM hidden-to-gate weight matrix. These weights were not thresholded. \\n\\nFinally, thank you for the suggestions regarding the organization of the results. Since the interesting patterns revealed by the MDS come from the distinctive positions of controller and integrator units, we think it might be easier for the readers if we explain how the controller units were identified first. However, we completely understand your concern and will revise the manuscript to describe our motivation and methods regarding the k-core analysis more clearly.\"}",
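The sketch below walks through the two connectivity analyses described above on placeholder data: counting each unit's "strong" projections at the thresholds probed (|z| > 3, 4, 5) and correlating the counts with unit timescales, then building a graph from the top 258 absolute weights and extracting a k-core with networkx. Mapping a gate row back to a unit via `row % H` is our assumption, as are all of the random weights and timescales.

```python
import numpy as np
import networkx as nx
from scipy.stats import pearsonr

H = 650
rng = np.random.default_rng(0)
W = rng.normal(size=(2 * H, H))            # stacked W_hi, W_hf (placeholder)
timescales = rng.gamma(2.0, 2.0, size=H)   # placeholder per-unit timescales

z = (W - W.mean()) / W.std()
for thr in (3, 4, 5):                      # thresholds probed in the reply
    n_strong = (np.abs(z) > thr).sum(axis=0)   # strong projections per unit
    r, p = pearsonr(n_strong, timescales)
    print(f"|z|>{thr}: r={r:.2f}, p={p:.3f}")

# Graph on the top 258 absolute weights (as for Figure 3C), then its k-core.
rows, cols = np.unravel_index(np.argsort(np.abs(W), axis=None)[-258:], W.shape)
G = nx.Graph((int(r_) % H, int(c_)) for r_, c_ in zip(rows, cols))
G.remove_edges_from(nx.selfloop_edges(G))  # k_core requires no self-loops
core = nx.k_core(G)                        # maximal-k core by default
print("k-core size:", core.number_of_nodes())
```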
"{\"title\": \"Official response to Reviewer 2\", \"comment\": \"Thank you for your valuable feedback! Please see our point-by-point response below:\\n\\n- *It\\u2019s not clear to me if the notion of time is a meaningful one in a language model. For example, the duration of contextual effects on a unit that codes syntactic number will presumably be highly variable and depend upon the details of the particular sentence being encoded. Thus a natural question is how variable are these timescales from moment-to-moment? What\\u2019s being plotted is the average across a bunch of sentences, segmented at a particular moment (a conjunction). How robust are these results if one examines a different point in a sentence? Are the timescales of some units more variable than others?*\\n\\nThank you for raising the point. Indeed, the timescale of individual units could vary based on the syntactic distance, which can be different from the token distance measured in our study. As pointed out by reviewer 3, one would expect that functional units such as syntax or number units should \\u201creset\\u201d their timescale at certain points of the sentence (please refer to our response to the figures generated for Reviewer 3 regarding the reset process). \\n\\nThank you for the suggestion about testing other locations in sentences, rather than just the specific \\u201c, and\\u201d conjunction. We conducted a new analysis in which we segmented the context and shared segments at a fixed distance from the sentence onset (i.e. we pick the 10th token as the segmentation point). We found that the timescale organization was largely preserved, regardless of whether we measured the cross-context effect at the \\u201c, and\\u201d boundary [as in our original analysis] or simply before and after the 10th token [as in this new analysis]. (Please refer to the figure here: https://anonymous.4open.science/repository/ef3696c5-e97e-4bd8-b12a-94795a038b8c/timescale_corr_commaAnd_middle.png).\\n\\nAt the same time, we did find that there was a small subset of units (fewer than 10 units) whose timescales were clearly shorter following the \\u201c, and\\u201d conjunction. This is intriguing, as it suggests that some units were \\u201cresetting\\u201d their context following the \\u201c, and\\u201d conjunction. In future work, we will also seek to identify whether these units. \\n\\nIn all, we agree with the reviewer that the timescales measured using \\u201ctokens\\u201d may be flexible, and can vary according to the syntactic context, for example. To address how syntactic distance affects context encoding in individual unit, carefully controlled context conditions are needed, which is beyond the scope of the current study. However, we would like to stress the importance of token distance for RNNs in general, since it serves as an important parameter for RNNs to predict the next token in any kind of sequences, before it learns to flexibly integrate information based on syntactic distance in language specifically.\\n\\nWe will revise the manuscript to include the analysis suggested by the reviewer, which suggests that the timescales measured at the level of tokens are robust across different choices of the location of the prior context segment.\"}",
"{\"title\": \"Official response to Reviewer 4\", \"comment\": \"First of all, thank you for your positive comments on the paper. Please see our response to the specific questions/comments below:\\n\\n- *I would have liked to have a discussion with respect to what the hierarchical organisation is due to. Is this merely a repercussion of the connectivity, for example? What do the authors think?*\\n\\nIs the hierarchical timescale phenomenon a direct result of the connectivity? At the level of layers (i.e. the fact that Layer 1 showed a shorter timescale overall, relative to Layer 2), this effect is likely due to connectivity. If unit B in Layer 2 receives an input from unit A in Layer 1, and if Unit A is sensitive to changes in the input from N words earlier, then that context-sensitive activation is passed as an input to Unit B. As a result, Unit B will most likely show at least some sensitivity to changes in the input from N words earlier. Thus, on average, when an entire population of units is downstream from another population, we should expect the downstream layer to have longer timescales, on average. \\n\\nThe link to connectivity is less straightforward for within-layer connections. Models from the neuroscience literature indicate that higher-degree nodes in dynamical systems will (under some conditions) tend to exhibit slower dynamics than lower-degree nodes (Baria et al., 2013, https://doi.org/10.1016/j.neuroimage.2013.01.072), so one might expect that higher-degree nodes (simply by virtue of changing state more slowly) should also have longer context-dependence. However, this is not always the case, as shown in the GRU model that we trained to test the generality of the timescale-connectivity findings (Please see our Response to Reviewer 2). Moreover, it is certainly theoretically possible for a small subset of very long-timescale nodes to operate through a connection bottleneck.\\n\\nWe will revise the manuscript to note these connectivity-timescale relationships, which were also of interest to Reviewer 2.\\n\\n- *In terms of work that looks at ablation (i.e., damage), it might be useful to bear in mind limitations of such work if various (seemingly, perhaps) extraneous factors are not taken into account, see: https://doi.org/10.1007/s42113-020-00081-z*\\n\\nThank you for sharing the interesting article. We agree that ablation methods have some limitations and therefore in the current study we proposed this model-free method to investigate the functional property of individual units without lesioning the model. We will cite this paper in relation to our own new ablation findings in the revised paper.\\n\\n\\n- *Minor:\\nFigures are very hard to read, is it possible to redesign them slightly to make the text bigger?*\\n\\nWe will get feedback from colleagues and attempt to improve the readability and scaling of the figure panels.\\n\\n\\n- *In LaTeX to open double quotes you need to use two backticks. Also the \\\\cite and \\\\citep commands should be used appropriately in terms of places where \\\\citep is needed as well as use of optional arguments to avoid double parentheses.*\\n\\nWe will revise the quotation and citation format. Thank you for the tips!\"}",
"{\"title\": \"Towards understanding the internal organisation of LSTMs\", \"review\": \"This paper looks at LSTMs with the intention of understanding their functional connectivity. I am not sure exactly what the relationship between the brain and LSTMs is being assumed or proposed herein \\u2014 however I understand the need to understand complex neural networks regardless of their relationship to biological systems.\\n\\nI would have liked to have a discussion with respect to what the hierarchical organisation is due to. Is this merely a repercussion of the connectivity, for example? What do the authors think?\\n \\nIn terms of work that looks at ablation (i.e., damage), it might be useful to bear in mind limitations of such work if various (seemingly, perhaps) extraneous factors are not taken into account, see: https://doi.org/10.1007/s42113-020-00081-z\\n\\nI think this paper can be polished to the level of a solidly good paper if the authors can sketch out a bit more their rationale and syllogisms with respect to my above questions.\", \"minor\": [\"Figures are very hard to read, is it possible to redesign them slightly to make the text bigger?\", \"In LaTeX to open double quotes you need to use two backticks. Also the \\\\cite and \\\\citep commands should be used appropriately in terms of places where \\\\citep is needed as well as use of optional arguments to avoid double parentheses.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Exploratory analyses of timescales in neural language models\", \"review\": \"This paper applies tools from neuroscience to understand how language models integrate across time. The basic approach is to present a phrase, preceded by two different context phrases: one that is natural (i.e. the phrase that actually preceded it in the corpus) and one that is randomly selected. The authors then measure how long it takes for the unit activations to become similar for the two different contexts, which provides a measure for how long the context impacts the representation. They find that (1) timescales increase at later layers of the language model (2) that only a small fraction of units exhibit long timescales (3) that long/medium-timescale units appear to come in two forms which they try and characterize using graph-style analyses.\\n\\n--\", \"pros\": \"How language models integrate across time is clearly important, and this paper describes interesting first steps in characterizing the analysis of time using relevant tools from the neuroscience literature. \\n\\nThe method presented is simple and broadly applicable. \\n\\nThe graph-style results seem intriguing if a little hard to make sense of. I also think that the sparsity of the long-timescale units is cool and interesting.\\n\\n--\", \"limitations_and_questions\": \"1.\\tIt\\u2019s not clear to me if the notion of time is a meaningful one in a language model. For example, the duration of contextual effects on a unit that codes syntactic number will presumably be highly variable and depend upon the details of the particular sentence being encoded. Thus a natural question is how variable are these timescales from moment-to-moment? What\\u2019s being plotted is the average across a bunch of sentences, segmented at a particular moment (a conjunction). How robust are these results if one examines a different point in a sentence? Are the timescales of some units more variable than others? -- Update: the authors have repeated their analysis for a different sentence point (after the 10th word) and report similar results. This analysis is helpful, though of course the 10th word is not a very principled break point, and there presumably is a lot of variation in timescales that are being averaged across. I continue to wonder how meaningful the notion of an absolute timescale is. -- \\n\\n2.\\tNone of the steps in the graph analyses seemed particularly natural or well-motivated to me. Why were the graph edges thresholded at z>5 and why was k-core analysis performed? I find it hard to make sense of what this analysis tells us about how language information is processed. Is there some reason why medium timescale \\u201ccontroller\\u201d units and long-timescale \\u201cintegrator\\u201d units should help with language processing? If these results are purely exploratory and lack a clear interpretation, then perhaps the authors could help the reader by explaining the thought process behind the exploration. Perhaps starting with the MDS plot would be useful rather than the k-core analysis, because the MDS plot clearly shows some interesting structure. -- The authors have motivated some of their analyses by discussing brain research reporting that longer-timescale regions are more densely connected. Of course, the relationship between connectivity between large-scale brain regions and the units in a LSTM remains highly speculative. But having some motivation is helpful. 
--\\n\\n3.\\tIt would be interesting to know how dependent these findings are on the model\\u2019s architecture. Would similar results be found for a Transformer or a simpler GRU-style RNN? -- The authors have attempted to address this point, but with limited time were not able to train a network to a high level of performance. --\\n\\n--\", \"minor_points\": \"In Figure 4, it would be helpful if the absolute timescale was labeled in all plots rather than the rank of the unit or the \\u201cnormalized timescale\\u201d. The absolute timescale seems much more meaningful to me (and the units can of course still be ranked, just the axis labels changed or augmented). \\n\\nThe legend for Figure 4c is incorrect.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A promising approach to explore the emergent structure of LSTM models, which needs further development to shed light on emergent function\", \"review\": \"This paper explores the application of innovative methods to track the flow of linguistic information in LSTM language models. In particular, the overarching question is how contextual information might be encoded in the network at the level of single units, and how context disruption might alter the LSTM dynamics and thus impact its predictive ability.\\nThe paper is clear and it tackles an interesting question. The approach is well motivated, and the authors give a brief survey of the most recent applications of this kind of methodology in linguistics and cognitive neuroscience studies.\\nThe methodology is generally appropriate, though some details and parameters (e.g., numerical thresholds) seem to be chosen arbitrarily. Also, the analysis could be improved by applying statistical testing in order to better quantify the strength of the observed effects.\\nOverall, I think this is a nice paper, though it might be especially relevant to the linguistics community rather than to the ICLR community. Moreover, I think that further analyses are required in order to better clarify some important aspects. In particular, I think that ablation studies should be performed in order to better identify the functional role of the \\u201ccontroller\\u201d and \\u201cintegrator\\u201d units, whose actual functional role remains a bit speculative (and mostly based on structural / connectivity information). It would also strengthen the paper to have some more controlled simulations, where the contextual information is defined according to specific linguistic constraints, in order to better characterize what the target units are actually encoding. Indeed, as also noted by the authors almost \\u201call the long timescale units are of unknown function\\u201d. Finally, I think that it would be important to establish whether these findings are generally applicable to LSTM models, regardless of the specific architecture under investigation (e.g., What happens if we force the LSTM to rely on fewer units? Does the hierarchical organization of the context improve by adding more layers?).\", \"other_comments\": [\"Why did the author choose to test the model on a different corpus (Anna Karenina novel) rather than considering a test set from the same corpus from which the training set was derived? The Tolstoy book might have a quite different linguistic structure from that of the corpora used to train the LSTMs.\", \"It might be informative to also include a third condition in-between \\u201cIntact\\u201d and \\u201cRandom\\u201d context, where the same context words are maintained with scrambled order. This would allow to better understand the role of individual words in shaping context representation and activating the LSTM units.\", \"In Fig. 1D, it is interesting to note that the Unit 823 (green line) actually exhibits a sharp increase in difference after the shared segment starts. Do the authors have a possible explanation for this kind of phenomena? Was it observed systematically in other units?\", \"In relation to the results shown in Fig. 3A, I did not understand how the thresholds and parameters for the k-core analysis were chosen.\", \"Pg. 3: there is a typo regarding the size of the output layer (5,0000)\", \"In Fig. 
A1, error bars would help in better understanding the actual difference between the curves.\", \"In order to improve reproducibility, it would be very helpful to share the source code used for these analyses.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Great to see some methods from neuroscience applied to interpretability research for a relevant question, results and setup could be improved\", \"review\": [\"_**Update after author response**_: I think this is a very promising paper, and I am really excited about seeing techniques from neuroscience employed to answer questions about neural network models. The authors have further conducted several additional experiments after reviewer comments, which I appreciate. However, my most fundamental concern -- the mismatch between the method and the way that it is validated -- unfortunately still stands, which is why I would encourage the authors to further pursue this line of work, but recommend to reject it for ICLR.\", \"**Summary**\", \"This paper proposes to apply time-scale methods from neuroscience to investigate the timescale organisation in neural language models. More specifically, the authors test the timescale of individual units in a word- and character-level LSTM by comparing the units' activations values on the same sentence, but with different contexts. Using this method, the authors first show that the higher layers on average have longer timescales. They then, for all units, they fit a logistic function to the \\\"recovery\\\" curves and use the half-times of this curves as an indication of the time scale of these units. They test the syntax unit and two long-distance units found by Lakretz et al and show that the number units have similar time-scales, while the syntax unit have a longer time scale. Lastly, the authors analyse the connectivity between the longer time scale units and find that the units with longer processing timescales make a larger number of strong projections. Within these units, the authors identify two sets of units in the word-level LSTM: \\\"controller units\\\", that play a role in how the connectivity of the network is updated, and \\\"integrator units\\\", that instead integrate information.\", \"**Strong points**\", \"Neuroscience has long been asking questions about the brain that are very similar to the questions we now ask about neural networks, cross-pollination between these fields is extremely important, and this paper contributes to this\", \"Aside from the main technique, the paper introduces some interesting and useful methods, such as projectivity analysis and k-core analysis. I think these methods can be useful for other researchers as well\", \"Time scale analysis of LSTMs is a very relevant and interesting topic, that deserves more attention than it is currently getting\", \"*Concerns*\", \"My main concern is that there seems to be a mismatch between the \\\"language time scales\\\" on which the authors operate: their experiment is designed to investigate the impact of extra-sentential context, but the Lakretz et al results they keep coming back to concern syntactic phenomena that are only relevant *within* a sentence, which is a different scale. In other words, the units found by the authors of this paper are long-distance when it comes to integrating context, but the syntax and number units found by Lakretz et al are not really related to that: they model relationships *within* sentences. Theoretically speaking, they should be reset at the beginning of every new sentence and they should thus be completely independent from the content. That the authors find this to be untrue is interesting, but inconsistent with what Lakretz et al describe these unit do. 
Since this is not addressed at all in the paper, it makes the results in general a bit difficult to interpret. _**Update after author response**: In their response the authors clarified that they have only analysed single sentences, where two distinct subsentences are combined with a conjunction. This, unfortunately, does not make a difference for the argument: whether two sentences are split by a full stop or instead concatenated with \\\"and\\\" does not make any difference for the argument above, since the subject-verb agreement relationships modelled by the units the authors look at do not cross these boundaries either. Furthermore, in their response the authors state that they find that the context representations of units were 'reset' at sentence boundaries, as I asked before. I appreciate that the authors did these additional experiments, but I find the result somewhat worrisome: since the units they are looking at are syntactic units that encode number across long-distance subject-verb relationships, they should be reset both when a new sentence starts and when a new conjunct with a new relationship starts. In terms of SV relationships, there should be no difference between \\\"The boy kicked the ball and the girl caught it\\\" and \\\"The boy kicked the ball. The girl caught it.\\\" That the authors do find a difference points to a potential flaw in methodology._\", \"Relatedly, the authors say that their results show that the syntax unit is a long-distance unit, while the number units are not. This is not consistent with what they say in the related work section, but also not with the results reported by Lakretz et al, who hypothesise that the syntax units represent the depth of the syntactic dependency. This is something that changes with every new incoming word, whereas the number units are the ones that have to keep their activation constant across time.\", \"While, as I said before, I think it is great that the authors try to bring methods from neuroscience into the field, I do think that in this case the main method they propose is only very marginally different from earlier work (in particular Khandelwal et al). Perhaps it would make more sense to put a bit more stress on the rest of the methods as well (btw, also Lakretz et al do connectivity analysis).\", \"The results are a bit underexplained, and understanding them requires many back and forths to the appendix. I would have appreciated a bit more motivated interpretation of several aspects. For instance: why is there such a large difference in activation differences in different units in the \\\"pre-shared segment\\\" part, and is this related to the half-time (it seems so from the plots)? What is the difference between character and word-level models in terms of expectations (we'd expect there to be an additional level of time-hierarchy, perhaps?) How do assessing activation differences and correlations differ in terms of conclusions? These things should, in my opinion, all be worked out a bit better.\", \"Lastly, there are a few unsupported claims, the most important of which is that their method recovers the previously discovered units of Lakretz et al, while (as far as I understand), they actually only *use* their method to analyse those neurons, but did not find them independently. 
(for other suggestions and comments, see below).\", \"To summarise, while I think the idea is very nice and definitely worth working out further, I do think that some work is needed to make this a publishable paper.\", \"*Suggestions/comments for authors*\", \"_Typographic_:\", \"If you use quotes in latex, you should use different ones for left (`) and right ('), for them to appear correctly (check for instance line three in the introduction)\", \"To prevent additional spaces after abbreviations like e.g. and i.e., put a backslash: \\\"e.g.\\\\ \\\"\", \"Lerner et al --> put all references within parentheses\", \"Introduction switches from present tense to past tense in the last paragraph\", \"\\\"we measure the time-taken for the effect of this prior context to \\u201ddecay\\u201d (see Methods)\\\" --> I don't really understand what this means, you measure how long it takes for these changes to not be measurable anymore?\", \"Try to avoid double parentheses with abbreviations, e.g.: (WLSTM Gulordava et al. (2018)) should be: (WLSTM, Gulordava et al; 2018). You can do this with \\\\citep[text before][text after]{citation}.\", \"\\\"has an 650-dimensional\\\" --> \\\"has a 650-dimensional\\\"\", \"\\\"without fine-tuning to the novel\\\" --> I first thought this sentence was unfinished until I read back and realised that \\\"the novel\\\" is your corpus. This is a bit confusing; perhaps you could rephrase.\", \"\\\"how the cell state activation differ\\\" --> \\\"how the cell state activations differ\\\"\", \"\\\"we will see that the activation difference drop quickly\\\" --> drops quickly / see the activation difference drop quickly\", \"There are several references that were published at ACL* conferences that are listed as arxiv papers in the reference list (Lakretz et al, Gulordava et al, Khandelwal et al)\", \"_Content_\", \"I would say that the conclusion that \\\"Overall, prior works suggests that a small subset of units track long-range dependencies\\\" is rather overstated: Lakretz et al found that the units representing long distance number information were sparse, but this does not imply that long range information in general is represented sparsely. Their method also focusses quite exclusively on finding sparsely distributed properties, as more distributed properties cannot be found with ablation. Furthermore, this is just one study, focusing on one syntactic aspect. I would suggest to rephrase this a bit.\", \"Lakretz et al actually identified several syntax units, but only one of them was interpretable.\", \"I find it a bit confusing that in 3.2, second paragraph, you first talk about comparing cell state activation, then say that you compare hidden state activations and then talk again about the cell state activation\", \"Figure 1 C & D: I don't think these figures add much to the paper, for the following reasons i) They show only individual units and no average, making it difficult to interpret the values; ii) while, as pointed out in 5.1, the *rate* of decay is the most important, the cut-off point is not indicated in the figure, which puts a stress on irrelevant aspects: the actual difference between the two lines.\", \"I would appreciate to have Figure A.1 in the main text, it is important for the story.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
WDVD4lUCTzU | Universal Sentence Representations Learning with Conditional Masked Language Model | [
"Ziyi Yang",
"Yinfei Yang",
"Daniel M Cer",
"Jax Law",
"Eric Darve"
] | This paper presents a novel training method, Conditional Masked Language Modeling (CMLM), to effectively learn sentence representations on large-scale unlabeled corpora. CMLM integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. Our English CMLM model achieves state-of-the-art performance on SentEval, even outperforming models learned using (semi-)supervised signals. As a fully unsupervised learning method, CMLM can be conveniently extended to a broad range of languages and domains. We find that a multilingual CMLM model co-trained with bitext retrieval~(BR) and natural language inference~(NLI) tasks outperforms the previous state-of-the-art multilingual models by a large margin. We explore the same-language bias of the learned representations, and propose a principal component based approach to remove the language-identifying information from the representation while still retaining sentence semantics. | [
"multilingual representations",
"sentence embeddings"
] | Reject | https://openreview.net/pdf?id=WDVD4lUCTzU | https://openreview.net/forum?id=WDVD4lUCTzU | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"0EpVC3wT3F0",
"W0yDIwgcCIn",
"BqWIXVyTp3",
"RUu_AFkp7jK",
"4bJBmzhP0EO",
"0vG2tl203W",
"vpFoFpu2UF",
"O1lJjJzcIbi",
"FdjrveSXho",
"geo0tdb3NW-",
"I_H2kz-72M",
"1jVUti2kkB4"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040452881,
1606207514441,
1606013623027,
1606013529795,
1606013386682,
1606013276372,
1606012460663,
1605963711227,
1604161002236,
1604004712556,
1603988938738,
1603802693484
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3639/Area_Chair1"
],
[
"ICLR.cc/2021/Conference/Paper3639/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3639/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3639/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3639/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a Conditional Masked Language Modeling (CMLM) method to enhance the MLM by conditioning on the contextual information.\\n\\nAll of the reviewers think the results are good. However, the reviewers also think the intuition and experiments are not so convincing. The responses and revisions still not satisfy all the reviewers' major concern.\"}",
"{\"title\": \"Response to Reviewer 1\\u2019s reply\", \"comment\": \"We appreciate Reviewer 1\\u2019s timely response to our rebuttals. Thank you! Please find below our answers to questions raised Reviewer 1.\\n\\n**\\u201cThe paper would be far more compelling if the authors can provide strong evidence that the sentence embeddings do well on tasks where using a BERT model is either less effective due to performance or computational reasons.\\u201d**\", \"we_actually_have_provided_results_on_such_tasks_that_reviewer_1_asks_for\": [\"On Amazon Review Dataset, we did try finetuning BERT (Table 4, \\u201cEncoder parameters are trained during finetuning\\u201d) with in-domain data. In all 4 languages, our models (CMLM, CMLM+BR, CMLM+BR+NLI) have much better performance than finetuned BERT (e.g., 88.6% v.s. 74.0% classification accuracy on Japanese). Especially note that CMLM without finetuning (row \\u201cCMLM\\u201d in the first group in Table 4) even outperforms finetuned mBERT (row \\u201cmBERT\\u201d in the second group). This shows that finetuning is not necessarily the only way to produce good embeddings; a well-pretrained sentence representation like CMLM can generate powerful representations..\", \"We further evaluate languages in the Tatoeba Dataset (table 5). In all 36 languages, our models outperform BERT by significant margins. Concretely, the average retrieval accuracy of our model is 94.7% v.s. 38.7% of mBERT. We believe these two evaluations reflect the case where \\u201ca BERT model is less effective due to performance (than CMLM)\\u201d.\", \"**\\u201cWhile SentEval is a useful benchmark to evaluate sentence representations, it doesn't reflect well how these representations will be used in practice. A fine-tuned BERT model will likely perform strongly on these tasks.\\u201d**\", \"In practice, there are many use cases where sentence representations are needed. For example, pre-encoding sentences for retrieval (text records search). Sentence embeddings are still one of the best choices for clustering, retrieval, and modular use of text representations for downstream tasks.\", \"**From these questions raised by Reviewer 1, we feel like the reviewer may have a concern about the research direction of sentence embeddings. Is sentence representation a research direction worth studying when we already have BERT? Why not just finetune BERT?**\", \"In cases where supervised data are unavailable and you cannot finetune BERT, e.g., clustering and retrieval, having a general-purpose sentence encoding system is crucial for problems.\", \"Actually in practice, finetuning BERT can result in a drop in performance. In some in-house applications, we observe that finetuning BERT actually yields worse results, e.g. tasks with a single sentence as input. Maybe this is because finetuning makes BERT forget what it learns in pretraining. But in the BERT original paper, the GLUE tasks that BERT shows obvious advantage are those with pair-wised inputs, where BERT-style fine-grained interactions in the finetuning are at advantage. For tasks with single input, there is no strong evidence that a sentence encoding system cannot perform as well as BERT.\", \"In many non-NLP fields, sentence representations are pre-fixed input features to other systems, which is the same setting that SentEval holds. For example, in Biology [4] and Social Network Analysis [5, 6]. 
That\\u2019s part of why sentence encoding systems like USE (one of the most downloaded pre-trained text modules on TensorFlow Hub) and InferSent are still widely used, even after BERT was introduced.\", \"\\u201cHow much information and what information we can encode into one sentence\\u201d is still an open-ended research problem in NLP [1]. CMLM and explorations on language-agnosticism in this paper provide some insight into this research question.\"], \"reference\": \"[1] Conneau, Alexis, et al. \\\"What you can cram into a single $ &!#* vector: Probing sentence embeddings for linguistic properties.\\\" ACL, 2018.\\n\\n[2] Cer, et al. \\\"Universal sentence encoder.\\\" arXiv preprint arXiv:1803.11175 (2018).\\n\\n[3] Conneau, Alexis, et al. \\\"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data.\\\" EMNLP 2017.\\n\\n[4] Chen, Qingyu, Yifan Peng, and Zhiyong Lu. \\\"BioSentVec: creating sentence embeddings for biomedical texts.\\\" 2019 IEEE International Conference on Healthcare Informatics (ICHI). IEEE, 2019.\\n\\n[5] Mishra, Rohan, et al. \\\"SNAP-BATNET: Cascading author profiling and social network graphs for suicide ideation detection on social media.\\\" Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Student Research Workshop. 2019.\\n\\n[6] Wang, Qiaozhi, et al. \\\"# DontTweetThis: Scoring Private Information in Social Networks.\\\" Proceedings on Privacy Enhancing Technologies 2019.4 (2019): 72-92.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"Thanks for your review and valuable feedback! Please find our responses below:\\n\\n**1. About \\\"experiments are weak\\\".**\\n\\nThe evaluation benchmarks in our paper are widely adopted in previous text representation learning works. For example, SentEval [1,2,3], Amazon Reviews [4,5,6] and Tatoeba [7, 8]. The benchmarks i the paper thoroughly examine the capability of a representation system, including semantic alignment (Tatoeba), transfer learning to various kinds of downstream tasks (sentiment analysis (SST, MR, CR), semantic classification, NLI (SICK-E), text similarity (SICK-R) and paraphrase detection (MRPC) in SentEval, XEVAL) and zero-shot cross-lingual transfer learning (Amazon Reviews). Our models consistently show strong performance across these benchmarks. We believe our evaluations are detailed and in-depth.\\n\\n**2. About \\\"Model is largely based on prior work...\\\"**\\n\\nWe provide a summary of main contributions of this paper in the last paragraph in the introduction section. To reiterate, we propose an unsupervised sentence representation learning method CMLM. To the best of our knowledge, CMLM is a novel model architecture proposed for the first time.\\n\\n**3. About \\\"Weak Baselines\\\".**\", \"baseline_models_considered_in_the_paper_include_many_sota_sentence_representation_models\": \"SkipThought, QuickThought, InferSent, USE and LASER. As shown in Table 1-5, CMLM consistently outperform baseline models. To address the effects from differences in training data, we train multiple baselines with the same training data of CMLM, e.g. QuickThought (CC), English BERT large/base (CC). As shown in table 1, CMLM outperform these baselines that are trained with the same data resources.\\n\\n**4. About \\\"Evaluation protocol\\\".**\\n\\nFollowing your suggestion, we evaluate our models on Tatoeba [7, 8], a multilingual retrieval benchmark, as shown in Table 5. Our models \\u201cCMLM+BR\\u201d outperforms all baseline models by a significant margin in terms of the average performance. It has the highest accuracy in 30 out of 36 languages.\\n\\n**5. About \\\"Data used for pretraining: It is difficult to ...\\\"**\\n\\nFollowing your suggestion, we train QuickThought, an unsupervised sentence representation learning method, using the same Common Crawl dumps that our models are trained on. To address the possible advantage coming from the Transformer, we use a transformer encoder in QuickThought instead of a GRU (RNN) in the original QuickThought implementation. The model is denoted as QuickThought (CC) in Table 1. Using a transformer encoder and Common Crawl does not make QuickThought better than our model. Also notice that the model XLM-R also uses Common Crawl corpora. Results in Table 2 and 4 shows that our model CMLM still outperforms XLM-R.\\n\\n**6. About \\\"paper presentation and organization\\\".**\\n\\nThanks for the suggestion! We\\u2019ve edited the paper to make the story more coherent following your suggestion. Especially, we added a paragraph in the introduction section (the second last one) to describe how the paper is organized.\\n\\n**7. Extra evaluations for multilingual representations on existing benchmarks.**\\n\\nThis is a good idea! As mentioned above, we\\u2019ve evaluated on the Tatoeba dataset (table 5). Besides XEVAL, the multilingual representations are also evaluated on Amazon Reviews. On both Amazon Reviews and Tatoeba, our models outperform all baseline models.\\n\\n**8. 
About \\\"What resources are used to train each method\\\".**\\n+ For English and Multilingual CMLM, training data (sec. 3 and sec. 4.1) are generated from three Common Crawl dumps (2020-05, 2020-10, 2020-16, see https://commoncrawl.org/the-data/get-started/). English CMLM takes ~5 days using 64 Cloud TPUs (128 TPU chips total). Training Multilingual CMLM takes ~12 days using 64 Cloud TPUs.\\n+ Multitask co-training CMLM+BR takes ~5 days 64 Cloud TPUs. Information about BR training data can be found at sec. 4.2.\\n+ Cross-lingual NLI finetuning takes ~12 hours using 8 cloud TPUs. Information about data used for cross-lingual finetuning can be found at sec. 4.3.\", \"references\": \"[1] Cer, et al. \\\"Universal sentence encoder.\\\" arXiv preprint arXiv:1803.11175 (2018).\\n\\n[2] Conneau, Alexis, et al. \\\"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data.\\\" EMNLP 2017.\\n\\n[3] Yang, et al. . \\\"Parameter-free Sentence Embedding via Orthogonal Basis.\\\" EMNLP-IJCNLP 2019.\\n\\n[4] Zhou, et al. \\\"Cross-lingual sentiment classification with bilingual document representation learning.\\\" ACL 2016.\\n\\n[5] Chidambaram, et al. \\\"Learning Cross-Lingual Sentence Representations via a Multi-task Dual-Encoder Model.\\\" RepL4NLP-2019. 2019.\\n\\n[6] Xu, et al. \\\"Cross-lingual Distillation for Text Classification.\\\" ACL 2017.\\n\\n[7] Artetxe, et al. \\\"Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond.\\\" ACL 2019.\\n\\n[8] Hu, et al. \\\"Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.\\\" ICML 2020.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"Thanks for your review and valuable comments! Please find our responses below:\\n\\n**1. \\\"What is the intuition for the proposed Model?\\\"**\\n\\n- As mentioned in the second paragraph in section 2, the intuition behind CMLM is \\\"to make the encoder produce an effective sentence level embedding of the first sentence for better MLM prediction of the second sentence\\\". We followed your suggestion by modifying the introduction section, especially the second paragraph, to make the intuition behind CMLM clearer.\\n- The intuition for bitext retrieval (BR) is to make the multilingual representation language agnostic, which is confirmed by the outstanding zero-shot cross lingual performance on Amazon Reviews (Table 4) and high cross lingual accuracy on Tatoeba (Table 5).\\n- The intuition for cross lingual NLI finetuning is to provide supervised learning signals and improve the quality of sentence representations. This is supported by the significant improvements from NLI finetuning on Amazon Reviews Dataset (Tabel 5, between rows \\u201cCMLM+BR\\u201d and \\u201cCMLM+BR+NLI\\u201d) and XEVAL (Table 3, between rows \\u201cS3\\u201d and row \\u201cS3+NLI\\u201d).\\n\\n**2. \\\"CMLM looks similar to SkipThought. The baselines used for comparison are not complete since authors should compare with baselines (e.g. SkipThought) using Transformer.\\\"**\\n\\nFollowing your suggestion, we implement QuickThought (a more recent and better unsupervised sentence representation learning model than SkipThought) with Transformer and train with our data. It is denoted as \\u201cQuickThought (CC)\\u201d in Table 1. Leveraging our data or Transformer does not make QuickThought better than our models.\", \"also_note_that_our_model_differs_from_skipthought_in_the_following_aspects\": \"+ SkipThought relies on an extra decoder network while CMLM only has the encoder. \\n+ SkipThought predicts the entire sentence while CMLM predicts masked tokens only so the predictions can be done in parallel.\\n\\nThese two differences make CMLM more efficient to train when compared with SkipThought.\\n\\n**3. \\\"Are there any other ways besides using \\\"concatenation\\\" in CMLM? It can be more convincing to see some analysis or results here.\\\".** \\n\\nYes. And analysis and results were already in the paper. We are sorry if results were not presented more explicitly. Concretely, besides the \\u201cconcatenation\\u201d, we also tried the \\u201cskip connection\\u201d configuration in CMLM. Results using this \\u201cskip connection\\u201d are presented in Table 7, row \\u201cskip\\u201d. The model architecture of \\u201cskip connection\\u201d is as follows. Given two sentences $s_1$ and $s_2$. By inputting $s_2$ to the transformer encoder, we obtain an output $M \\\\in R^{H \\\\times L}$, where H denotes the hidden size and L denotes the maximum token length. Recall the sentence representation of s1 is computed as $v \\\\in R^{H}$. We then concatenate $v$ to each column $m_i$ ($i = 1,2,\\\\dots,L$) in $M$. The concatenated tensor $M'$ is of size $2H\\\\times L$, We then use $M\\u2019$ as the input for masked token prediction in $s_2$. As shown in table 2, our current configuration \\u201cconcatenation\\u201d is better.\\n\\n**4. Citations.**\\n\\nThanks for pointing out this paper! We\\u2019ve cited the DeCLUTR paper as suggested.\\n\\n**5. 
We added extra experiments on the Tatoeba semantic retrieval dataset to the paper.**\\n\\nWe further evaluate our models on the Tatoeba dataset, as shown in Table 5. Our model \\u201cCMLM+BR\\u201d outperforms all baseline models by a significant margin in terms of the average performance. It also has the highest accuracy in 30 out of 36 languages.\"}",
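To make the "skip connection" configuration described in point 3 above concrete, here is a minimal PyTorch sketch: the sentence vector of $s_1$ is appended to every token-level output of $s_2$ before a masked-token prediction head. The shapes, the random tensors, and the plain linear MLM head are placeholders, and the paper's default "concatenation" configuration is not reproduced here.

```python
import torch
import torch.nn as nn

B, H, L, VOCAB = 4, 768, 128, 30000
v = torch.randn(B, H)     # sentence vector of s1 from the encoder
M = torch.randn(B, L, H)  # token-level encoder outputs for s2

# "skip connection": append v to each of the L token positions of s2,
# giving M' of size (B, L, 2H), then predict masked tokens from M'.
M_prime = torch.cat([M, v.unsqueeze(1).expand(-1, L, -1)], dim=-1)
mlm_head = nn.Linear(2 * H, VOCAB)        # placeholder masked-token head
logits = mlm_head(M_prime)
print(logits.shape)                       # torch.Size([4, 128, 30000])
```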
"{\"title\": \"Response to Reviewer 4, Part 1\", \"comment\": \"Thank you for the detailed comments and insightful suggestions! Please find our responses below:\\n\\n**Overall:**\\n1. About SentEval: we\\u2019ve added results on SICK-R in Table 1. All the numbers in Table 2 and Table 3 are updated by including SICK-R performance on XEVAL.\\n\\n2. We\\u2019ve added XLM-R performance on Amazon Reviews (Table 4). We also manage to improve the multilingual sentence representations by applying smoothing of the volume of data per language during pretraining (Table 2,3,4).\\n\\n3. To have a more thorough understanding of the sentence representations learnt by our models, we added another experiment using the Tatoeba task from the XTREME benchmark (Hu. etc, 2020) dataset that covers 36 languages, shown in Table 5. Our models \\u201cCMLM+BR\\u201d outperforms all baseline models by a significant margin in terms of the average performance. It also has the highest accuracy in 30 out of 36 languages.\\n\\n4. Thanks for pointing out the typos and SBERT naming suggestions! We\\u2019ve corrected them in the paper.\\n\\n**Sec 1, Sec2:**\\n\\n1. The quantitative comparison of using MEAN, MAX and CLS is included in the appendix (Table 8).\\n\\n2. About siamese networks: we\\u2019ve added a footnote on page 3 regarding the name.\\n\\n3. $v_d$ should be $v_p$. We\\u2019ve corrected this typo in the revision.\\n\\n**Sec 3:**\\n\\n1. About \\u201cflip-flopping\\u201d and \\u201cchallenging\\u201d: We\\u2019ve removed the description of \\u201cmore challenging\\u201d in the first paragraph of section 2. We also add a sentence in the same paragraph to point out the connection between the \\u201corder-swapping\\u201d in our model and Skip-Thought predictions.\\n\\n2. About masking rates: We add analysis on the ablation study of masking ratios in the appendix (Table 9). By \\u201cchallenging is better\\u201d, we mean though low masking ratios yield higher CMLM accuracy in training, it doesn't produce better sentence representations.\\n\\n3. About SentEval \\u201csubset selection\\u201d and SBERT: We\\u2019ve included SICK-R results following your suggestion. We\\u2019ve included the SentEval tasks that SBERT presented in their original paper (see section 5 in SBERT paper). Also notice that CMLM is only trained on unlabeled corpora while SBERT also uses supervised NLI data. Even so CMLM achieves competitive results. This indicates that CMLM, as an unsupervised sentence representation learning model, is able to obtain sentence representations as good as (if not better) supervised learning models.\\n\\n4. About the length 256: The length refers to the maximum length.\\n\\n**Sec 4:**\\n\\n1. About batch size in BR: In general, larger batch sizes improve performance until we reach ~2048, since each example will see more \\u201cmismatched\\u201d examples. After 2048, we don\\u2019t see obvious improvements in performance from increasing batch size. We\\u2019ll add detailed results on this in the final version.\\n\\n2. The margin m is set to be 0.3. We\\u2019ve added this in the paper.\\n\\n3. About the number of projections: We\\u2019ve included results of N=20 in Table 7. It shows N=15 has a better overall performance than N=20.\\n\\n4. The translation is done by Google Translate. The score for each language in XEVAL is computed as the average of performance on tasks MR, CR, SUBJ, MPQA, SST, TREC, MRPC, SICK-E and SICK-R as in the evaluations for English models.\\n\\n5. 
About using the concatenation of u, v, u-v and u*v: We\\u2019ve cited previous works using this method.\\n\\n6. About BR \\u2192 CMLM+BR: The suggested path is a great addition. We\\u2019ll include the results in the final version. We choose to experiment with CMLM \\u2192 CMLM+BR because we notice that BR converges faster than CMLM; therefore, we train CMLM first.\\n\\n7. About how the training step is determined: The training step is determined by the masked token prediction accuracy (CMLM) and retrieval accuracy (BR) on the validation set.\\n\\n8. About [CLS] and max-pooling representations: The performance drop of mBERT and XLM-R using [CLS] and max-pooling is very similar to the trend in Table 8 (in the appendix).\"}",
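As a hedged illustration of the bitext retrieval (BR) objective discussed in the response above (in-batch negatives, margin m = 0.3), the sketch below uses an additive-margin softmax formulation over in-batch negatives. This is one common way to implement such an objective; the authors' exact loss, any similarity scaling, and the encoder producing the embeddings are assumptions, not confirmed details.

```python
import torch
import torch.nn.functional as F

def bitext_retrieval_loss(u: torch.Tensor, v: torch.Tensor, m: float = 0.3) -> torch.Tensor:
    # u, v: (B, H) embeddings of B sentences and their translations.
    u = F.normalize(u, dim=1)
    v = F.normalize(v, dim=1)
    sim = u @ v.t()                                      # (B, B) cosine similarities
    sim = sim - m * torch.eye(len(u), device=u.device)   # additive margin on the positives
    labels = torch.arange(len(u), device=u.device)       # i-th source matches i-th target
    return F.cross_entropy(sim, labels)                  # other batch rows act as negatives
```

This formulation also makes the batch-size effect described in the response concrete: every example in a batch of size B is contrasted against B-1 mismatched translations.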
"{\"title\": \"Response to Reviewer 4, Part 2\", \"comment\": \"**Sec 5**\\n\\n1. Our claim in the paper is \\u201cthe first principal components in each monolingual space **primarily** encodes language information\\u201d, not \\u201conly encodes\\u201d. It does not only encode language identification information because note in Table 6, in most cases PCR yields better retrieval performance. For some languages, PCR makes the retrieval performance drop a bit, which indicates that principal components can still contain semantic information.\\n\\n2. Figure 3 is a 2D PCA. The x and y axis is the direction of first and second maximum variation through the data. Using silhouette coefficient is a good idea! We\\u2019ll add that in the final version.\\n\\n3. Explanation for \\u201cConcatenated with the sequence outputs of $s_2$\\u201d: Given two sentences $s_1$ and $s_2$. By inputting $s_2$ to the transformer encoder, we obtain an output matrix $M \\\\in R^{H\\\\times L}$, where $H$ denotes the hidden size and $L$ denotes the maximum token length. Recall the sentence representation of $s_1$ is computed as $v$ (of size $H$). We then concatenate $v$ to each column vector $m_i$ ($i = 1,2,\\\\dots,L$) in $M$. The concatenated tensor $M\\u2019$ is of size $2H\\\\times L$, We then use $M\\u2019$ as the input for masked token prediction in $s_2$.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for your review and valuable suggestions! Please find our responses below:\\n\\n1. The architecture of the 3-layer MLP projection $P$ is as follows. Let h denote the dimension of the input sentence vector (e.g. h = $768$ in BERT base; $h = 1024$ in BERT large). Let FC ($a$, $b$, $c$) denote a fully connected layer with input dimension $a$, output dimension $b$ and nonlinearity function $c$. The three layers are FC($h$, $2h$, ReLU), FC($2h$, $2h$, ReLU), FC($2h$, $h$, None). The information has been added to the appendix. \\n\\n2. Code and reproduction: we are working on making the code available publicly. Also we will release pretrained models to the public so that researchers can reproduce the results and leverage the model for their own projects. Links will be posted here once available.\\n\\n3. To have a more thorough understanding of the sentence representations learnt by our models, we have included an additional experiment on the Tatoeba task from XTREME benchmark [1] that covers 36 languages, shown in Table 5. Our model \\u201cCLM+BR\\u201d has the highest average performance and outperforms all baseline models on 30 out of 36 languages. \\n\\nIf you have any other questions, please let us know!\", \"references\": \"[1] Hu, Junjie, et al. \\\"Xtreme: A massively multilingual multi-task benchmark for evaluating cross-lingual generalization.\\\" ICML 2020.\"}",
"{\"title\": \"The discussion stage is open!\", \"comment\": \"Dear Reviewers:\\n\\nThanks for your insightful reviews! Now the discussion stage is open and the authors have posted their responses. We will appreciate that the following things-to-do can be done by Tues, Nov 24.\\n\\n1 Acknowledge explicitly that you have read the responses.\\n\\n2 Modify your review if necessary.\\n\\n3 Communicate with the authors/reviewers/AC by adding/responding to the comments if necessary.\\n\\nThanks a lot!\"}",
"{\"title\": \"The results are good but mainly empirical\", \"review\": \"This paper presents Conditional Masked Language Modeling (CMLM), which integrates sentence representation learning into MLM training by conditioning on the encoded vectors of adjacent sentences. It is shown that the English CMLM model achieves strong performance on SentEval, and outperforms models learned using (semi-)supervised signals. It is also found that a multilingual CMLM model co-trained with bitext retrieval (BR) and natural language inference (NLI) tasks outperforms the previous state-of-the-art multilingual models by a large margin. The paper further proposes a principle component based approach to remove the language identifying information from the representation while still retaining sentence semantics.\\n\\n-Strengths\\n\\nLearning sentence representations on large scale unlabeled corpora is an important research problem. This paper presents a heavily empirical study, with a series of experiments to evaluate the proposed sentence representation learning method. Multilingual experiments are conducted, with interesting results on language agnostic.\\n\\nThe proposed method, as shown in Figure 1, is somewhat new.\\n\\n-Weaknesses\\n\\nThe study is mainly empirical.\\nThe authors should provide more details about the three-layer neural network as the projection P (\\u00b7).\\nAnother concern is that the contribution of this paper to research community may be weak, if the code is not released and the results are not easily reproduced.\\n\\n--------update after reading the response-----------\\n\\nThanks for the authors' response. Mainly empirical and limited in methodology novelty. So I tend to keep the score.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"SkipThought + MLM: worth exploring, more details would improve the article\", \"review\": \"The authors present conditional masked language modeling (CMLM), a new method for unsupervised pretraining, in which the skip-thought notion of conditioning on neighboring sentences is adopted for masked language modeling. The upshot of the proposed approach is that it generates single sentence embeddings that perform competitively on SentEval. In the multilingual setting, the authors combine their CMLM method with a bitext retrieval objective (selecting a sentence\\u2019s translation from the other sentences of the language in the batch) that increases performance on a version of the SentEval tasks translated into 14 other languages. In their analysis, the authors make further claims about multilingual embeddings capturing language ID information in their first principle components, a conclusion somewhat substantiated by their results. The authors provide a small amount of ablation experiments for experimental/model design choices.\\n\\nThe underlying idea is worth pursuing, the execution and description could be improved, people will be interested in the results that are present (but then have questions).\\n\\n\\nOverall\\n\\nWhy only a subset of SentEval for the English experiments (3.2) but then the full SentEval in the multilingual XEval experiment (4.5.1)? Especially if you are trying to make a single-sentence encoder but then evaluating on SICK-E instead of SICK-R, which is arguably a more applicable eval set. \\n\\nWhy no XLM-R in the amazon reviews analysis (4.5.2)?\\n\\nSec 1.\", \"fig_1\": \"box of chocolate*s*\\n\\n\\nSec 2.\\n--\\u201cconditional\\u201d, \\u2192 \\u201cconditional,\\u201d\\n\\n--no quantitative comparison of using max vs mean pool vs CLS embedding\\n\\n--first sentence is feed \\u2192 first sentence is fed\\n\\n--three-layer neural network \\u2192 three-layer MLP\\n\\n--refer to using the same set of encoder weights for different inputs as siamese networks, as done in the sentence-bert paper https://en.wikipedia.org/wiki/Siamese_neural_network\\n\\n-- v_d is used but not defined\\n\\n\\nSec 3.\\n--Skip-thought originally used a sentence to predict the generation of both the preceding and succeeding sentences. This is functionally equivalent to your flip-flopping the order of the consecutive sequences. I would make the point that these steps are equivalent. Note that this is also not necessarily making the task \\u201cmore challenging\\u201d (and moreover I am not sure why \\u201cmore challenging\\u201d equates with \\u201cbetter pretraining method for language understanding\\u201d -- and an ablation of this step is not included to show that it is in fact necessary and useful).\\n\\n-- similarly, no analysis of masking rate, nor explanation for why \\u2018more challenging\\u2019 is better.\\n\\n-- \\u201cWe explore two transformer configurations, base and large, same as in the original BERT paper.\\u201d \\u2013 fragmented\\n\\n-- The number of projections N = 15. \\u2013 fragmented\\n\\n-- SentBERT \\u2192 SentenceBERT or SBERT\\n\\n-- On the specific subset of SentEval tasks you\\u2019ve selected, the majority of the performance discrepancy is in the SICK-E task--otherwise, the overall #\\u2019s are rather interchangeable. How does this change if you add in the rest of the SentEval tasks, and why were they omitted? 
Analysis/exploration for why you get such a performance boost only on SICK-E would also be useful.\\n\\n-- \\u201cthe length \\u2026 set to be 256 tokens\\u201d: please clarify whether the \\u201clength\\u201d refers to the maximum length, or each sentence is a fixed-length chunk consisting of 256 tokens\\n\\n--typo: \\u201cwe also exploring\\u201d\\n\\n\\nSec 4.\\n--If your introduced bitext retrieval objective depends on batch size, experiments comparing the effect of batch size are necessary. \\n\\n-- Please specify the value of the margin m being used in the experiments\\n\\n--Choice of number of projections is also not motivated (and in fact contradicted by the ablation experiment finding that 15 is better)\\n\\n-- the motivation and contribution for XEVAL are great-- the explanation of the dataset is lacking. What translation API was used? How was the XEVAL score computed for each language? Is it the full set of SentEval downstream tasks?\\n\\n-- cite precedent for using the concatenation of u,v, u-v and u*v. (or show its effect via ablation)\\n\\n-- BR \\u2192 CMLM+BR configuration not evaluated\\n\\n-- choice of different training step #\\u2019s in each configuration is not particularly motivated.\\n\\n-- \\u201cafter exploring options including [CLS] representations and max pooling.\\u201d what was the performance drop?\\n\\n-- typo: \\u201chas a significant upon mBERT\\u201d\\n\\n\\nSec 5.\\n--It is not clear that you can make the claim that the first PC *only* encodes language-identification information.\\n\\n--I assume Figure 3 is 2-dimensional t-SNE (needs a citation), which comes with its own set of caveats as a visualization tool. Quantitative clustering analysis such as the silhouette coefficient might be more appropriate than a plot. If Figure 3 is not t-SNE, please specify the meaning of the X and Y axes.\\n\\n-- did not try higher than n=15 projections but claimed it was optimal\\n\\n-- The description of the \\u201cskip\\u201d ablation is unclear: please clarify what is meant by \\u201cconcatenated with the sequence outputs of s2\\u201d.\\n\\n-- typos: \\u201cremoving the first principal component \\u2026 effectively eliminate\\u201d, \\u201cfor both two models\\u201d, \\u201crepresentations \\u2026 generally exhibits\\u201d\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A reasonable work, but a bit limited in terms of technical contribution, particularly considering that there is not a good intuition explanation.\", \"review\": \"I appreciate the response from the authors to my review, as well as to others.\\n\\nMy concerns on the intuition are most not solved. Although in this DNN dominating era, we cannot expect the explainability as we had before, I still believe that a solid work should be grounded on a reasonable basis, which could be in a high level, such as BERT and SBERT. Let's refer to the example given in the model architecture. The projection of the sentence vector of \\\"Life is a box of chocolates\\\" is left-concatenated with the masked embeddings of the second sentence. This operation is very much lacking in intuition, how come the projection of a sentence representation can be concatenated with the embeddings? In addition, \\\"The second encoder shares the same weights with the encoder used to embed s1\\\", considering their inputs are very different, weight sharing for the two encoders are also problematic.\\n\\nAnother point I just noticed, although the authors claimed that their model is better than SBERT, and did a comparison with SBERT-large, they did not compare with SBERT-base, which makes the conclusion unreliable.\\n\\n---------------------------------------------------------------------------------------------------------------------------\\nThis paper proposes a method called \\\"Conditional Masked Language Model\\\" for unsupervised sentence representation learning. The method involves two-sentence encoder, where one sentence depends on the encoded sentence level representation of the adjacent sentence. The experimental results are good overall, as the proposed method tends to give the best results across monolingual and multilingual benchmark datasets.\\n\\nThere are still some concerns about the novelty of this paper. First, I think the explanations for the intuition of the proposed model can be clarified, especially in the introduction section. Second, the baselines used for comparison are not complete, which makes me concern about the effectiveness of the model. The proposed model is the combination of Skip-Thought (Kiros et al., 2015) and BERT masked LM (Devlin et al., 2019). Their experimental results show a detailed comparison of BERT but ignore much about the Skip-Thought. Although the authors mentioned the results of the Skip-Thought model on the SentEval benchmark, the encoder used in the Skip-Thought (Kiros et al., 2015) is RNN while the author used the Transformer Encoder. I would appreciate a better and fair comparison of the Skip-Thought model by using same transformer encoder and same training corpus.\\n\\nI am not sure why you used the concatenation when you do the masked LM. Are there any other ways to do that? It can be more convincing to see some analysis or results here. Additionally, there is another work titled \\u201cDeCLUTR: Deep Contrastive Learning for Unsupervised Textual Representations\\u201d, which also focuses on unsupervised sentence representation learning. Although it is from arxiv, it would be nice that the authors can mention this work.\\n\\nIt seems good that the authors performed many experiments over many different datasets across monolingual and multilingual. 
The exploration of the same-language bias of the learned representations is also very interesting.\\n\\nTo summarize, the paper is a bit limited in terms of technical contribution, particularly considering that there is not a good intuitive explanation, but some analysis in this paper looks good.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Low significance, issues in experiments and setup\", \"review\": \"----------\\n\\nI appreciate the response from authors and the additional experiments. I do think the semantic search task adds value to the paper. However, the paper continues to be centered around the SentEval benchmark results. While SentEval is a useful benchmark to evaluate sentence representations, it doesn't reflect well how these representations will be used in practice. A fine-tuned BERT model will likely perform strongly on these tasks. The paper would be far more compelling if the authors can provide strong evidence that the sentence embeddings do well on tasks where using a BERT model is either less effective due to performance or computational reasons. \\n\\nI prefer to keep my score. \\n\\n------------\\n\\nThis work proposes a self-supervised training objective called CMLM (conditional masked language model) for learning sentence representations. An encoder produces multiple fixed length representations of a given sentence and a decoder reconstructs the adjacent sentence given it\\u2019s masked version and the encoded representations. CMLM performs well on the SentEval benchmark. CMLM is further extended to the multilingual setting via bi-text retrieval contrastive training and training on NLI data. The multilingual version is shown to work well on multiple translated versions of the SentEval benchmark (SentEval data translated into other languages using an off the shelf translation system) and Amazon reviews (sentiment classification). \\n\\n\\nPros\\n* This work addresses the important problem of (unsupervised) sentence representation learning. Extracting fixed-length sentence representations from popular language model based encoders is a non-trivial problem and this work attempts to provide a solution.\\n* Experiments go beyond the standard English setting and evaluate sentence representations in the multilingual setting as well. \\n* Interesting modeling approaches.\\n\\nCons\\n* Experiments are weak. It is unclear to what extent the tasks + evaluation protocol considered here are reflective of language understanding. I don\\u2019t think strong baselines were considered. Some of the evaluation benchmarks considered seem arbitrary.\\n* Model is largely based on prior work. The main contribution is not clear. There are many settings considered in the paper and it is unclear if the proposed contributions are truly significant due to weak baselines are differences in data used for training different methods.\\n\\nThere are several issues with the experimental setup.\\n* Evaluation protocol: It is unclear if the evaluation protocol considered is measuring language understanding capability well. Representations from the encoders are held fixed and linear classifiers are trained on top of these fixed representations on downstream tasks using labelled data. To me, this is not a setting that demands sentence vectors. It only shows that the sentence vectors capture useful features. I would suggest focusing on a setting where the advantage of the sentence vectors can be demonstrated such as a retrieval problem.\\n* Baselines: It is unfair to compare the proposed method against baselines like BERT which are not designed to produce fixed length encodings. \\n* Data used for pre-training: It is difficult to gauge how good the method is in comparison to other models due to differences in the data used for pre-training. Ideally, there should be a table comparing models that use the exact same resources. 
In Table 1, although BERT-base/large is trained on the same data, it is not a strong baseline since mean-pooled representations from the encoder are treated as a sentence representation. Ideally, the model should be compared against a skip-thought baseline or an unsupervised sentence representation learning method that uses the same resources for training. \\n\\nI would have expected the authors to evaluate multilingual representations on existing benchmarks as well. I don\\u2019t find the proposed benchmark XEVAL very convincing. Claims would have been stronger if the authors had also included results on existing benchmarks. \\n\\nThe authors need to make it clear exactly what resources are used for training each method. \\n\\nPresentation can be improved, especially the organization of the paper. It is difficult to follow the paper and identify the main contributions in the current presentation. \\n\\nThe paper touches upon several things - conditional masked language modeling, bitext retrieval, NLI training, language agnosticism, etc., and I find the paper quite incoherent. I suggest the authors make a focused contribution and provide strong experimental evidence to support that contribution. Right now there are too many things, which makes it hard to make sense of the paper as a whole.\\n\\nWhile the approaches considered in the paper have some merit, the significance of this work is unclear due to issues in the evaluation.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
5IqTrksw9S | GLUECode: A Benchmark for Source Code Machine Learning Models | [
"Anjan Karmakar",
"Julian Aron Prenner",
"Miltiadis Allamanis",
"Romain Robbes"
] | A multitude of machine learning models for source code have been proposed in recent years, capturing various aspects of the inherent rich structure and semantics of code. However, these models are commonly designed to perform well on a single task, failing to capture code's multifaceted nature. To address this, we present GLUECode, Global and Local Understanding Evaluation of Code, a benchmark of diverse tasks to evaluate machine learning models of source code.
Crucially, GLUECode accounts for the distinct characteristics of source code: (1) source code is highly structured and (2) source code is often composed of multiple interacting entities. Existing tasks incentivize researchers to create models and code representations that perform well on a single task - commonly focusing on local reasoning. GLUECode aims to allow researchers to experiment with multiple local and global source code representations, and evaluate these models on their ability to capture the diverse characteristics of source code, thus driving the community towards building robust source code models incorporating global reasoning.
We present results for several baselines. The GLUECode tasks are challenging for the evaluated baselines; no model achieves convincing performance across all tasks. This indicates that there is ample room for progress on GLUECode. | [
"benchmark",
"source code",
"code understanding",
"deep learning"
] | Reject | https://openreview.net/pdf?id=5IqTrksw9S | https://openreview.net/forum?id=5IqTrksw9S | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Tyy6yYmyFF",
"n9YCh6agl7b",
"NnhsZYuLPei",
"R-bSpu0h7C",
"s5w9HNVTv0u",
"2wxUw9lldR2",
"NAysJU2Mpj",
"LggrHoYYpXV",
"qGlhXs2QP2E",
"bq3dXflM3sl",
"xgVAhxnfJLn",
"cLZRceRY8C",
"PF3AhLPIoOF",
"ezoO6GZRMKg",
"TtE23Njh2Xv",
"_WpKDUDjXtO"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040510117,
1606249503446,
1605698336739,
1605544395050,
1605272276249,
1605264310462,
1605264264784,
1605263588003,
1605263539268,
1605263198578,
1605262848937,
1605262764492,
1603860889862,
1603820155782,
1603720310198,
1603714635601
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3636/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a new source code modeling benchmark, with the unique twist being that we not only have code source text, but we also have build information, which allows extracting richer information to construct labels from. This enables, for example, a null pointer prediction task with labels coming from an inter-procedural static analysis tool. AC and reviewers agree that this is a valuable framing for a benchmark suite. Unfortunately, it\\u2019s not clear that the benchmark in its current form delivers on the promise of the framing. Much of the interest and novelty is limited to just the one NullToken task, and reviewers raise a number of concerns including dataset size and whether the task truly measures the inter-procedural reasoning that it sets out to measure. AnonReviewer2 raised some good questions here that the authors promised to address in a forthcoming comment, but that didn\\u2019t come before the discussion deadline. I\\u2019d encourage the authors to use the reviewer suggestions to more strongly establish that these tasks measure what they set out to measure, and also to consider adding other tasks that measure whether our ML models are capable of deeper / longer-range reasoning. In total, there is a lot of potential here, but the work needs another iteration before it\\u2019s ready for publication.\"}",
"{\"title\": \"Concern about missing classification tasks is reduced.\", \"comment\": \"Thanks for your response. I agree that adding tasks with a classification head may not always result in any additional gain in understanding models performance. Also, generation tasks may not impact global-reasoning performance differently than the classification task. I was considering scenarios where models designed for generation tasks could have different performance requirement than the models designed for classification tasks. Adding classification task would provide additional results from the benchmark for comparing different models designed for classification tasks.\"}",
"{\"title\": \"RE: Clarification about missing classification task that requires global reasoning\", \"comment\": \"****\\n**Concern 6:** The tasks that require global reasoning are mostly generation tasks in the benchmark. Therefore, the evaluation metrics could be less representative of global-reasoning performance of classification models.\\n\\nThere is no classification task in the benchmark that relies on the global properties. Currently, the evaluation of this property is represented by some generation tasks. As one of the motivation for the proposed benchmark is to introduce tasks that require an understanding of the global properties, a classification task with global scope would make the benchmark more complete.\\n\\n> We can include other tasks in an extended version of the benchmark with classification as the modus operandi for some global tasks; however, since our benchmark is more interested in how models approximate the sample data features, adding or adapting tasks with a classification head serves only as a peripheral need.\\n>\\n> We would like to better understand your concern about generation tasks hindering global-reasoning performance compared to classification tasks. Could you please explain why you believe this is the case?\\n\\n****\"}",
"{\"title\": \"Clarification about missing classification task that requires global reasoning.\", \"comment\": \"There is no classification task in the benchmark that relies on the global properties. Currently, the evaluation of this property is represented by some generation tasks. As one of the motivation for the proposed benchmark is to introduce tasks that require understanding of the global properties, a classification task with global scope would make the benchmark more complete.\"}",
"{\"title\": \"General clarifications\", \"comment\": [\"We would like to thank the reviewers for their time and valuable feedback.\", \"Below are some common clarifications for concerns shared by several reviewers.\", \"The goal of GLUECode is to provide a benchmark that tests both local and global properties. We thus try to balance the tasks for both settings, including tasks that have been addressed in the literature locally, but could benefit from more global information.\", \"The GLUECode dataset is extracted from a corpus of compilable Java code. What adds a greater value to our datasets, beyond simply scraping GitHub projects, is the added parsability and compilability of projects. Such a setting allows us to run a greater number of tools, which includes a variety of static analysis tools, to procure new labels and representations for additional tasks. Cross-file information, which is useful for the global tasks, could not have been made available otherwise.\", \"In keeping with the spirit of a public benchmark, we confirm that we do plan to release all the datasets and relevant code publicly.\", \"The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is added to the updated version of the paper. The missing reference in Section 2.2 is now resolved.\"]}",
"{\"title\": \"Response to concerns Pt.I\", \"comment\": \"We would like to thank the reviewers for their time and valuable feedback. Below are some common clarifications for concerns shared by several reviewers.\\n\\n- The goal of GLUECode is to provide a benchmark that tests both local and global properties. We thus try to balance the tasks for both settings, including tasks that have been addressed in the literature locally, but could benefit from more global information.\\n \\n- The GLUECode dataset is extracted from a corpus of compilable Java code. What adds a greater value to our datasets, beyond simply scraping GitHub projects, is the added parsability and compilability of projects. Such a setting allows us to run a greater number of tools, which includes a variety of static analysis tools, to procure new labels and representations for additional tasks. Cross-file information, which is useful for the global tasks, could not have been made available otherwise.\\n \\n- In keeping with the spirit of a public benchmark, we confirm that we do plan to release all the datasets and relevant code publicly.\\n \\n- The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is added to the updated version of the paper. The missing reference in Section 2.2 is now resolved.\\n \\n\\nBelow, we provide a response to your concerns:\\n\\n****\\n\\n**Concern 1:** Except for the null deference analysis, none of the tasks particularly require global reasoning. The NPath complexity, operator prediction, method naming and code completion are local in scope and have been solved in the literature as such. The paper conjectures that method naming and method call completion can benefit from global reasoning, but offers no evidence to that effect.\\n\\n \\n\\n> Clarification: Given that the transformer model performs quite well in comparison, for the first two tasks i.e. npath complexity and operator prediction, while struggling to score well on the other tasks provides initial evidence of needing more context information. However, we truly value your concern raised here, and it is clear that adding further baselines with global context will shed some light upon this issue. \\n> \\n> We would like to add that with the exception of npath complexity prediction and null token prediction, there is a good amount of related work that tackles these problems mentioned, but there is still ample room for improvement on these tasks.\\n\\n****\"}",
"{\"title\": \"Response to concerns Pt.II\", \"comment\": \"****\\n**Concern 2:** The abstract says that \\\"However, these models are commonly designed to perform well on a single task, failing to capture code\\u2019s multifaceted nature.\\\" I don't agree with that just because a paper targets a single task, it fails to capture the multi-faceted nature of code. There are ample examples in the literature which take many views (e.g., ASTs, control flow, data flow, etc.) into account while solving a particular task.\\n\\n> Clarification: We precede our statement with \\u201cA multitude of machine learning models for source code have been proposed in the recent years capturing various aspects of the inherent rich structure and semantics of code\\u201d acknowledging the existence of models which take many views into account. However, these models are commonly designed to perform well only on a single task.\\n>\\n> By code\\u2019s multifaceted nature, indeed we mean the broader general approximation for code. Yes, even though there are many examples in the literature which take into account several views such AST with control and data flow information, which are still few and far between, they still miss out on other code properties that might be relevant to further downstream tasks. In that context, our statement merely implies that while capturing code properties for solving individual tasks many other aspects are left aside.\\n\\n \\n\\n****\\n\\n \\n\\n**Concern 3:** The code completion task is restricted to method calls. Does this include predicting method names or their parameters also?\\n\\n \\n\\n> Clarification: At this point, the models just predict the method names, not the parameters. Predicting just the correct method name would suffice for now, hard enough as it is, because eventually when such completion models are deployed on usage-platforms such as IDEs, a correct method call prediction as top-1 prediction would be enough regardless of the number of parameters. In the future, once models are able to solve method call completion, predicting the correct parameters would be a good test to evaluate on.\\n\\n \\n\\n****\\n\\n \\n\\n**Concern 4:** The seq2seq baseline could benefit by an attention layer.\\n\\n \\n\\n> Clarification: When it comes to baselines, there are a number of combinations we could evaluate with. However, we would encourage the community working on learning-based models of code to further improve on the baseline models which we have evaluated.\\n\\n \\n\\n***\\n\\n \\n\\n**Concern 5:** There is no description of the task-specific layers in the Transformer baseline.\\n\\n \\n\\n> Clarification: It is a standard RoBERTa linear classification head with dropout, for the transformer model.\\n\\n \\n\\n****\\n\\n \\n**Concern 6:** The results for the completion task are not made available for review.\\n\\n \\n\\n> Clarification: The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and shall be added to the updated version of the paper.\\n\\n \\n\\n****\\n\\n \\n\\n**Concern 7:** The relative performance of the baselines on the NullToken task is surprisingly. The authors should explain this.\\n\\n \\n\\n> Clarification: The simpler models such as MLP, LSTM, and CNN seem to be faring somewhat better. 
Subsequent evaluations on the null token prediction task showed variance in accuracies for the simpler models, and we need to conduct further diagnostic studies on them. We can report on them more comprehensively in a short time.\\n\\n****\\n\\n**Concern 8:** I did not understand the argument against comparison with previous work in Sec 4.1.\\n\\n\\u201cSome of our tasks (code completion and method naming) exist in previous work. While comparing with the literature would be insightful, it is difficult, as our task formulation (and our dataset) are quite different.\\u201d\\n\\n> Clarification: Sorry for the misunderstanding. What we mean is that since we use both a different dataset and different evaluation metrics (for reasons mentioned in the paper), doing an apples-to-apples comparison is not feasible. Thus, while we could compare performance on our tasks with numbers published in the literature, any comparison would have to be taken with a grain of salt, which is why we refrain from doing so.\\n\\n****\\n\\n**Concern 9:** It seems that code duplication between training and test sets is not entirely ruled out. This should be fixed.\\n\\n> Clarification: We have carefully checked all of our datasets and can ensure that there is no duplicated code between the training and test sets. For two of the tasks with a large number of samples, we even went a step further to ensure that the datasets are project-balanced, meaning that the test set only contains samples from projects not used in the training set and there is no duplicated code even for a large number of samples.\\n\\n****\", \"additional_clarifications_regarding\": [\"percentage of examples in which the paths span multiple methods\", \"evaluation on global models\", \"call-graph construction details\", \"will be added soon.\"]}",
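For reference, a "standard RoBERTa linear classification head with dropout" (the answer to Concern 5 above) is usually structured as in the sketch below, which mirrors the common HuggingFace RobertaClassificationHead layout; the dropout rate and the choice of pooling the first token are assumptions here, not details confirmed by the authors.

```python
import torch
import torch.nn as nn

class ClassificationHead(nn.Module):
    def __init__(self, hidden: int, num_labels: int, dropout: float = 0.1):
        super().__init__()
        self.dense = nn.Linear(hidden, hidden)
        self.dropout = nn.Dropout(dropout)
        self.out_proj = nn.Linear(hidden, num_labels)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, seq_len, hidden) encoder outputs.
        x = features[:, 0, :]            # representation of the first (<s>) token
        x = self.dropout(x)
        x = torch.tanh(self.dense(x))
        x = self.dropout(x)
        return self.out_proj(x)          # per-class logits
```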
"{\"title\": \"Response to concerns Pt.I\", \"comment\": \"We would like to thank the reviewers for their time and valuable feedback. Below are some common clarifications for concerns shared by several reviewers.\\n\\n- The goal of GLUECode is to provide a benchmark that tests both local and global properties. We thus try to balance the tasks for both settings, including tasks that have been addressed in the literature locally, but could benefit from more global information.\\n \\n- The GLUECode dataset is extracted from a corpus of compilable Java code. What adds a greater value to our datasets, beyond simply scraping GitHub projects, is the added parsability and compilability of projects. Such a setting allows us to run a greater number of tools, which includes a variety of static analysis tools, to procure new labels and representations for additional tasks. Cross-file information, which is useful for the global tasks, could not have been made available otherwise.\\n \\n- In keeping with the spirit of a public benchmark, we confirm that we do plan to release all the datasets and relevant code publicly.\\n \\n- The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is added to the updated version of the paper. The missing reference in Section 2.2 is now resolved.\\n \\n\\nBelow, we provide a response to your concerns:\\n\\n****\\n\\n**Concern 1:** However, it is not clear what global context is considered and how it is incorporated by the benchmark. In a ML for SE work, researchers may use various global contexts such as UML diagrams, library/API dependency, inter-procedural data/control flow, commit data, etc. It is not clear how these global context information can be satisfied by the benchmark.\\n\\n \\n\\n> Clarification: To get as much as global context information possible for a target method, one could consider the context information of the entire project. Therefore, we catalog all the methods present in a project along with their different representation types, including the raw code text representation. Some of these representations contain data/control flow information inherently; while some other global information types can be derived from the raw code. Lastly, we provide the call-graph for the project, connecting all the callers and the callees of the target method, with the cataloged methods along with their representations. Provision of such information edifice allows us to incorporate a broad global context based on call-graphs.\\n>\\n>Thus the goal of the benchmark is to steer research in the direction of more global contexts, but this is only a first step. With regard to additional global information such as commits or UML models, additional contexts would be interesting, but we think it is probably too challenging as a first step.\\n\\n****\\n\\n**Concern 2:** The authors can also describe more about the unique advantages of using the proposed benchmark. Currently, they are already many public datasets released by various papers in this field (thanks to the open science policy). Also, it is easy for researchers to download a large amount of source code from open source websites (such as Github) themselves. 
They can also process the source code using existing static analysis tools to obtain the data they need and share the data.\\n\\n \\n\\n>Clarification: Although downloading a large set of projects from GitHub is possible, compiling those projects at scale and extracting semantic facts is a non-trivial task that none of the existing datasets perform. These semantic facts (e.g. inferred types, dependencies, call graphs, etc.) are an important aspect for reasoning at a more global level. Clearing this hurdle for other researchers is likely to significantly ease their work.\\n\\n\\n****\\n \\n\\n**Concern 3:** Currently, GLUECode only provides a few types of source code representations. In recent years, researchers have proposed many different ways of representing source code tokens and ASTs. As an example, the following works use different AST-based source code representations (and it is not clear if the benchmark could provide necessary information to support these representations):\\n\\n \\n\\nYao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S. Yu. Improving automatic source code summarization via deep reinforcement learning. In ASE, pages 397\\u2013407. ACM, 2018.\\n\\n \\n\\nJ. Zhang, et al., A Novel Neural Source Code Representation based on Abstract Syntax Tree, In Proc. the 41st International Conference on Software Engineering (ICSE 2019), Montreal, Canada, 2019.\\n \\n\\n> Clarification: While we provide a single pre-processed AST representation for every sample, we think that post-processing it to transform it into another variant should be possible in general. Should there be a specific need that is not covered in our representation, we also provide the raw code of every data sample. From a reading of the papers mentioned, adapting our representation is feasible.\\n****\"}",
"{\"title\": \"Response to concerns Pt.II\", \"comment\": \"****\\n\\n**Concern 4:** The data quality should be discussed in detail, as low quality data will bias the analysis results. This is particularly important for a public benchmark. For example, if the benchmark contains a lot of duplicated code, the follow-up analysis will be misleading. Furthermore, software evolves. Very soon, new versions/commits will emerge. It is not clear if the evolution will degrade the data quality and the validity of the benchmark.\\n\\n \\n\\n> Clarification: We have carefully checked all of our datasets and can ensure that there is no duplicated code between the training and test sets. For two of the tasks with large number of samples, we even went a step further to ensure that the datasets are project-balanced, meaning that the test set only contains samples from projects not used in the training set and there is no duplicated code even for a large number of samples.\\n>\\n> With regards to the evolution of software, our datasets are derived from the 50K-C dataset (Martins et al., 2018) which a valid and compilable dataset accepted by the community. And as you ascertain that is not clear if the evolution will degrade the data quality in the future, in that case, our benchmark datasets and representations would still rely on the standard release of the 50K-C dataset. In general, evolution of datasets is a shared concern in many avenues of research, and more work in this area is needed.\\n\\n \\n\\n****\\n\\n \\n\\n**Concern 5:** The proposed benchmark data and code are not available for replication purpose.\\n\\n \\n\\n>Clarification: We plan to release all of the prepared datasets and the code for replication after the notification. Since this will be a public benchmark, anyone interested in participating is welcome to work on the datasets and evaluate their models.\\n\\n \\n\\n****\\n\\n \\n\\n**Concern 6:** In Table 2, the baseline result for Transformer-based method completion is missing.\\n\\n \\n\\n>Clarification: The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is be added to the updated version of the paper.\\n\\n****\"}",
"{\"title\": \"Response to concerns\", \"comment\": \"We would like to thank the reviewers for their time and valuable feedback. Below are some common clarifications for concerns shared by several reviewers.\\n\\n- The goal of GLUECode is to provide a benchmark that tests both local and global properties. We thus try to balance the tasks for both settings, including tasks that have been addressed in the literature locally, but could benefit from more global information.\\n \\n- The GLUECode dataset is extracted from a corpus of compilable Java code. What adds a greater value to our datasets, beyond simply scraping GitHub projects, is the added parsability and compilability of projects. Such a setting allows us to run a greater number of tools, which includes a variety of static analysis tools, to procure new labels and representations for additional tasks. Cross-file information, which is useful for the global tasks, could not have been made available otherwise.\\n \\n- In keeping with the spirit of a public benchmark, we confirm that we do plan to release all the datasets and relevant code publicly.\\n \\n- The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is added to the updated version of the paper. The missing reference in Section 2.2 is now resolved.\\n \\n\\nBelow, we provide a response to your concerns:\\n\\n****\\n\\n**Concern 1:** The operator prediction task seems \\u201ctoo easy\\u201d when only a single operator is masked. It is worth considering variations of this task when masking multiple operators.\\n\\n>Clarification: In case you refer to the transformer\\u2019s performance specifically, we think this could be due to the masked language modelling pretraining which might be an appropriate pre-training task for this specific case. In case you are thinking of something else, it would help if you could clarify a bit more. Regardless, constructing a multi-masked operator prediction task would make it much harder indeed.\\n\\n****\\n\\n**Concern 2:** For the code completion task, it should be clear whether comments are part of the permitted/desired prediction. In recent work, we are seeing increasing importance of natural language hints, and an explicit decision is required about this in the benchmark suite.\\n\\n>Clarification: Comments are available in the raw code representation for every data sample, it is up to the end-user to decide whether they\\u2019d like to use them in their chosen representations for their model predictions. Thus, the usage of comments for additional context is permissible for our benchmark datasets.\\n \\n****\\n \\n**Concern 3:** Can you please make the code and data available?\\n \\n\\n> Clarification: We plan to release all of the prepared dataset and the code for replication after the notification. Since this will be a public benchmark, anyone interested in participating is welcome to work on the dataset and evaluate their models, and use our code.\\n\\n****\"}",
"{\"title\": \"Response to concerns Pt.I\", \"comment\": \"We would like to thank the reviewers for their time and valuable feedback. Below are some common clarifications for concerns shared by several reviewers.\\n\\n- The goal of GLUECode is to provide a benchmark that tests both local and global properties. We thus try to balance the tasks for both settings, including tasks that have been addressed in the literature locally, but could benefit from more global information.\\n \\n- The GLUECode dataset is extracted from a corpus of compilable Java code. What adds a greater value to our datasets, beyond simply scraping GitHub projects, is the added parsability and compilability of projects. Such a setting allows us to run a greater number of tools, which includes a variety of static analysis tools, to procure new labels and representations for additional tasks. Cross-file information, which is useful for the global tasks, could not have been made available otherwise.\\n \\n- In keeping with the spirit of a public benchmark, we confirm that we do plan to release all the datasets and relevant code publicly.\\n \\n- The performance for the transformer-model for the method call completion task is now available. The transformer accuracy for the method call completion task is 0.534 and is added to the updated version of the paper. The missing reference in Section 2.2 is now resolved.\\n \\nBelow, we provide a response to your concerns:\\n\\n****\\n**Concern 1:** Although overall construction of the dataset could be useful for the community, sufficient evidence is not provided to establish the utility of the dataset compared to other existing datasets.\\n\\n \\n\\n> Clarification: The utility of our dataset and tasks is twofold: first, GLUECode is the only benchmark that provides tasks that both require local and global reasoning. Second, it provides the building blocks (including several base code representations) for researchers to experiment with the models that can solve the tasks. Additionally, GLUECode\\u2019s dataset is extracted from a corpus of compilable Java code. And beyond simply scraping GitHub projects, our dataset allows compilability of projects. Such a setting allows us to run a greater number of tools, which includes static analysis tools, to procure new labels and representations for additional tasks. Cross-file information (e.g. useful for completion and null dereference tasks) could not be made available otherwise.\\n \\n\\n****\\n**Concern 2:** An arbitrary minimum number of 50 files in a project is selected as a filtering method without presenting any supporting analysis.\\n\\n> Clarification: We used 50 as a heuristic for detecting small, possibly immature or toy projects. By filtering projects with more than 50 files (=classes in Java), we get a sufficient number of projects that have rich structure.\\n\\n****\\n\\n**Concern 3:** \\u201cNullToken\\u201d task is presented as the task that benefits most from global reasoning, which is the primary contribution of this paper. However, the dataset size for NullToken task is small, which significantly reduces the usefulness of the task in evaluating the models.\\n\\n>Clarification: We see the small number of samples in a different light: we see it as an incentive to promote more sample-efficient models, whether by leveraging pre-training or additional structure in the data. 
We also note that gathering even this limited amount of data is not trivial, as it involves running a costly static analysis on 5000+ compilable software projects that are large enough. \\n\\n****\"}",
"{\"title\": \"Response to concerns Pt.II\", \"comment\": \"****\\n**Concern 4:** The evaluation results of the baseline models are not well-explained.\\n> Clarification: The performance of the transformer-model for the method call completion task is now added to the updated revision. Subsequently we explain the results here.\\n> \\n> Overall, we see that the Transformer exhibits higher performance on the first four tasks (NPath prediction, Operator prediction, Method naming), and has reasonably acceptable performance only on the first two tasks (Npath prediction and Operator prediction), which are the most local ones.\\n> \\n> For the tasks which have some globalness aspect to it, the transformers have an average accuracy of ~40% with highest score being barely above the fifty percent threshold for the method call completion task. Even in the local tasks, where the transformers score well, there is still a margin for improvement of more than 20%.\\n \\n>**NPTH:** For npath complexity prediction task, the transformer model is the best performing model, with ~75% accuracy, followed by the sequence to sequence model with ~54% accuracy. The sequence to sequence model is able to encode the complexity from the input code into a single embedding which could then be rendered correctly as output. The transformer model using multi-head attention performs reasonably better. Further, the simple MLP shows the least-favorable performance for this task, with BiLSTM and CNN models doing marginally better.\\n\\n>**OPER:** For the operator prediction task, the transformer model performs the best, while the CNN seems to be worst-performing model for the given dataset. CNN\\u2019s are good at extracting position-invariant features, but since operator prediction needs important sequential information, it fares poorly in comparison.\\nThe BiLSTM model does comparatively good, as they are designed to make use of sequential data. Since, RNNs usually are good at predicting what comes next in a sequence, the BiLSTM model is second only to the transformer model. The sequence to sequence model does barely better than the baseline MLP, since sequence to sequence models encode the masked code to generate a single embedding which can effectively summarize certain properties of the code. And since operators do not represent a code property per se that can be translated into an output, at best the models could establish only simple associations between the input and output.\\n\\n>**NAME:** For methodnaming, the transformer model shows the best performance, followed by the sequence to sequence model, and then the BiLSTM model. For method naming, performance is much lower; it is also lower than in similar naming tasks, but having evaluated with different metrics, it shows that our choices yield a more challenging task.\\n\\n>**COMP:** Once again, for the method call completion task, the transformer model shows the best performance, followed by the sequence to sequence model, and then the BilSTM model.\\nIt is important to note here that unlike method naming, completion task has many labels (method api calls) which belong to the Java standard library, such as println(), toString() etc. which are commonly used, and which are easier to predict for DL models (Hellendoorn et al.,2019a). About 20% of the dataset consist of standard library method calls. 
This might explain why the models perform better on completion than on method naming.\\n\\n>**NTKN:** Finally, we observe that on the Null Token prediction task performance is very low; even the Transformer model is not faring well here, especially considering that a naive baseline would score 20%, and models are barely better than this, indicating opportunity for further progress. The simpler models such as MLP, LSTM, and CNN seem to be faring somewhat better. Subsequent evaluations on the null token prediction task showed variance in accuracies for the simpler models, and we need to conduct further diagnostic studies on them.\\n****\\n**Concern 5:** According to Table 2, the \\u201cCompletion\\u201d task requires increased non-structural and global reasoning compared to the \\u201cNaming\\u201d task. However, all the baseline models are showing poor performance in the \\u201cNaming\\u201d task compared to the \\u201cCompletion\\u201d task.\\n> Clarification: As mentioned, unlike naming, the code completion task has many labels (method calls) which belong to the Java standard library, such as println(), toString(), etc., which are commonly used and can be learned easily.\\n>\\n>About 20% of the dataset consists of standard library method calls. This seems to help the models perform better relative to the naming task. A brief explanation was given in Section 3.1 of the paper.\\n****\\n**Concern 6:** The tasks that require global reasoning are mostly generation tasks in the benchmark. Therefore, the evaluation metrics could be less representative of the global-reasoning performance of classification models.\\n\\n> If you could clarify this statement a bit, we would be able to respond to your concern better.\\n****\"}",
"{\"title\": \"GLUECode: A Benchmark for Source Code Machine Learning Models\", \"review\": \"Reasons for score:\\nA benchmark for evaluating source code ML models will help to accelerate the progress in the right direction. However, the analysis to support the dataset and the proposed tasks as a benchmark does not address some critical concerns (please see weakness section below).\", \"summary\": \"The paper presents a dataset of source code that allows experimenting with different representations and proposes five tasks to evaluate local and global reasoning capabilities of a source code machine learning model.\", \"strength\": \"1. GLUECode provides a labeled dataset with different representations by compiling ~5300 Java projects extracted from Github. This could be useful for future research in ML modeling for source code.\\n2. Evaluation results of five baseline models on five proposed benchmark tasks demonstrate varying performances over different tasks. \\n\\nWeakness\\n1. Although overall construction of the dataset could be useful for the community, sufficient evidence is not provided to establish the utility of the dataset compared to other existing datasets. An arbitrary minimum number of 50 files in a project is selected as a filtering method without presenting any supporting analysis. \\u201cNullToken\\u201d task is presented as the task that benefits most from global reasoning, which is the primary contribution of this paper. However, the dataset size for NullToken task is small, which significantly reduces the usefulness of the task in evaluating the models.\\n2. The evaluation results of the baseline models are not well-explained. According to Table-2 the \\u201cCompletion\\u201d task requires increased non-structural and global reasoning compared to the \\u201cNaming\\u201d task. However, all the baseline models are showing poor performance in the \\u201cNaming\\u201d task compared to the \\u201cCompletion\\u201d task. \\n3. The tasks that require global reasoning are mostly generation tasks in the benchmark. Therefore, the evaluation metrics could be less representative of global-reasoning performance of classification models.\", \"questions_to_author\": \"Please address and clarify the cons above.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"useful suite\", \"review\": \"### Summary ###\\n\\nThe paper presents GlueCode, a new benchmark suite for evaluating source code learning models. The suite includes 5 tasks, two of which are classification tasks and three are sequence generation tasks.\\n\\n### Strengths ###\\n\\n* A standard benchmark for evaluating source code models would be a blessing. \\n\\n* The selected tasks are interesting and compelling. Particularly interesting are the tasks that require fine-grained reasoning about the control and data flow of programs. The balance between classification and generation tasks is also solid. Other design choices like focusing on the scope of a single method also seem well justified considering the common practice in this area. \\n\\n### Weaknesses ###\\n\\n* It is hard (impossible) to evaluate the contribution of this paper without looking at the actual code and data. The devil is in the details. On the face of it, the suggested benchmark suite seems reasonable. \\n\\n* The only contribution of this paper is the benchmark suite. There is no additional novelty. This is not really a weakness, just a comment. I think that we should accept benchmark papers that help move the research area forward.\\n\\n\\n### Comments ###\\n\\n* The operator prediction task seems \\u201ctoo easy\\u201d when only a single operator is masked. It is worth considering variations of this task when masking multiple operators. \\n\\n* For the code completion task, it should be clear whether comments are part of the permitted/desired prediction context. In recent work, we are seeing increasing importance of natural language hints, and an explicit decision is required about this in the benchmark suite. \\n\\n### Questions for Authors ###\\n\\n* Can you please make the code and data available? All the described tasks make sense, the choice of baselines looks good. \\n\\n### Minor questions and comments ###\\n\\n* \\\"Across all representations, source code entities (methods and classes) are identified via a Universally Unique Identifier (UUID), and can be linked together. ?? provides details and examples.\\\"\\n\\n### A general comment about benchmarking papers ###\\n\\nAs a benchmark suite, this seems like a good step in the right direction, and I am happy to increase my score based on that. However, my current score is calibrated to take novelty into account when comparing to other papers.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Good objective but weak tasks and baselines\", \"review\": \"The objective of this paper is to present a benchmark of code understanding tasks in the spirit of GLUE benchmarks in NLP. Towards this, it designs 5 Java language tasks: NPath complexity, operator prediction, method naming, completion of method calls, and null dereference prediction. An evaluation on some common neural architectures is performed.\\n\\nThe first weakness of the paper is that the benchmark tasks do not fulfil the stated objective of the paper. The main argument of the paper is that many approaches in the literature focus on local (intra-procedural) prediction tasks and use either sequence representations or structured representations (e.g., ASTs, control and data flow graphs). This paper seeks to present a benchmark of tasks which requires going beyond this, by requiring global (inter-procedural) analyses and structured representations. This is a good objective and the community would certainly benefit from such a benchmark. However, among the proposed tasks, except for the null deference analysis, none of the tasks particularly require global reasoning. The NPath complexity, operator prediction, method naming and code completion (whose special case focussing on method calls in considered in this paper) are local in scope and have been solved in the literature as such.\\n\\nFor the null dereference prediction task, the paper uses a static analysis tool, Infer, to obtain labels. Infer is stated to return an execution path exhibiting null pointer deference. However, the paper does not give the exact number or percentage of examples from the dataset in which the paths do span multiple methods. Among all the tasks and data points, only these can be said to be truly requring global reasoning; but these details are missing and compared to the entire benchmark, this represents a small fraction. The paper conjectures that method naming and method call completion can benefit from global reasoning, but offers no evidence to that effect. I also have some concerns about the call-graph precision and ambiguity in null dereference task; these are listed in the detailed comments below.\", \"this_leads_to_the_second_weakness_of_the_paper\": [\"the baselines. First, the baseline methods consider only sequence based models even though the paper explicitly wants to promote structured representations. They are also not tuned enough. Second, the paper does not use any global reasoning in the baselines. The paper would be more convincing if it were to show that such a global model outperforms local models. This would help to concretely claim that at least some of the benchmark tasks require global reasoning, including method naming and method call completion as conjectured by the authors. I also have other concerns above the experiments, which I list below.\", \"Now, some detailed comments:\", \"The abstract says that \\\"However, these models are commonly designed to perform well on a single task, failing to capture code\\u2019s multifaceted nature.\\\" I don't agree that just because a paper targets a single task, it fails to capture the multi-faceted nature of code. There are ample examples in the literature which take many views (e.g., ASTs, control flow, data flow, etc.) into account while solving a particular task.\", \"I like that the benchmarks come with pre-processed inputs in different formats. However, are the call-graphs over-approximate or under-approximate? Which call-graph construction algorithm is used? 
Does it construct context-sensitive call-graphs? It is important to spell out these details since different call-graph construction algorithms offer different precisions and these would impact the precision of the models build using those representations.\", \"There is an unresolved ref in Sec 2.2.\", \"The code completion task is restricted to method calls. Does this include predicting method names or their parameters also?\", \"The labels of the null dereference prediction task are tokens from the vocabulary (a classification problem). Such a token may occur in multiple places in the method. As stated in the paper, Infer provides the actual dereference token susceptible to null dereference. So this task should use a pointer that localizes the bug to the specific token, along the lines of \\\"Neural Program Repair by Jointly Learning to Localize and Repair\\\" (ICLR'19).\", \"The baselines are not tuned enough. There is no hyper-parameter search. The datasets vary in sizes and characteristics and would benefit from appropriate hyper-parameters.\", \"The paper uses a closed vocabulary of 10K. It should report on the prevelance of OOV tokens in inputs and output labels.\", \"The seq2seq baseline could benefit by an attention layer.\", \"There is no description of the task-specific layers in the Transformer baseline. The results for the completion task are not made available for review.\", \"The relative performance of the baselines on the NullToken task is surprisingly. The authors should explain this.\", \"I did not understand the argument against comparison with previous work in Sec 4.1.\", \"It seems that code deplication between training and test sets is not entirely ruled out. This should be fixed.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"GLUECode: A Benchmark for Source Code Machine Learning Models\", \"review\": \"This paper presents GLUECode, a benchmark for evaluating machine learning models of source code. GLUECode considers both global and local contexts of source code, and aims to help researchers experiment with multiple source code representations and evaluate their models. The authors also presented results of several baselines on the benchmark.\\n\\nMachine learning for source code has attracted a lot of interests in recent years. It is good to see a benchmark consists of 5000+ projects, which could help advance this area of research. The authors also performed some GLUECode tasks and presented results for several baselines, which show that there is ample room for progress on GLUECode. Overall, the paper is well written.\", \"concerns\": \"The proposed work considers both global and local contexts of code (the benchmark\\u2019s name is Global and Local Understanding Evaluation of Code). Section 2.1 also dedicates to this. However, it is not clear what global context is considered and how it is incorporated by the benchmark. In a ML for SE work, researchers may use various global contexts such as UML diagrams, library/API dependency, inter-procedural data/control flow, commit data, etc. It is not clear how these global context information can be satisfied by the benchmark. \\n\\nThe authors can also describe more about the unique advantages of using the proposed benchmark. Currently, they are already many public datasets released by various papers in this field (thanks to the open science policy). Also, it is easy for researchers to download a large amount of source code from open source websites (such as Github) themselves. They can also process the source code using existing static analysis tools to obtain the data they need and share the data. \\n\\nCurrently, GLUECode only provides a few types of source code representations. In recent years, researchers have proposed many different ways of representing source code tokens and ASTs. As an example, the following works use different AST-based source code representations (and it is not clear if the benchmark could provide necessary information to support these representations):\\nYao Wan, Zhou Zhao, Min Yang, Guandong Xu, Haochao Ying, Jian Wu, and Philip S. Yu. Improving automatic source code summarization via deep reinforcement learning. In ASE, pages 397\\u2013407. ACM, 2018.\\n\\nJ. Zhang, et al., A Novel Neural Source Code Representation based on Abstract Syntax Tree, In Proc. the 41th International Conference on Software Engineering (ICSE 2019), Montreal, Canada, 2019.\\n\\nThe data quality should be discussed in detail, as low quality data will bias the analysis results. This is particularly important for a public benchmark. For example, if the benchmark contains a lot of duplicated code, the follow-up analysis will be misleading. Furthermore, software evolves. Very soon, new versions/commits will emerge. It is not clear if the evolution will degrade the data quality and the validity of the benchmark. \\n\\nThe proposed benchmark data and code are not available for replication purpose.\\n\\nIn Table 2, the baseline result for Transformer-based method completion is missing. \\n\\nThe paper is generally well-written. There are a few typos. For example:\\n\\nIn page 3, \\u201d?? 
provides details and examples...\\u201d\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
TmUfsLjI-1 | Which Model to Transfer? Finding the Needle in the Growing Haystack | [
"Cedric Renggli",
"André Susano Pinto",
"Luka Rimanic",
"Joan Puigcerver",
"Carlos Riquelme Ruiz",
"Ce Zhang",
"Mario Lucic"
] | Transfer learning has recently been popularized as a data-efficient alternative to training models from scratch, in particular in vision and NLP where it provides a remarkably solid baseline. The emergence of rich model repositories, such as TensorFlow Hub, enables practitioners and researchers to unleash the potential of these models across a wide range of downstream tasks. As these repositories keep growing exponentially, efficiently selecting a good model for the task at hand becomes paramount. We provide a formalization of this problem through a familiar notion of regret and introduce the predominant strategies, namely task-agnostic (e.g. picking the highest-scoring ImageNet model) and task-aware search strategies (such as linear or kNN evaluation). We conduct a large-scale empirical study and show that both task-agnostic and task-aware methods can yield high regret. We then propose a simple and computationally efficient hybrid search strategy which outperforms the existing approaches. We highlight the practical benefits of the proposed solution on a set of 19 diverse vision tasks. | [
"model",
"needle",
"models",
"haystack",
"haystack transfer learning",
"alternative",
"scratch",
"particular",
"vision",
"nlp"
] | Reject | https://openreview.net/pdf?id=TmUfsLjI-1 | https://openreview.net/forum?id=TmUfsLjI-1 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"TyDXRoU29Yl",
"r7BM1yDl2rC",
"yCFbNhLJmpG",
"kCEJfZGqjGU",
"26uo1ZWTvt",
"dii8uDOz4YU",
"M3PIhWATVbj",
"C4_UBOR7jWz",
"quU7eawAk_",
"sw1CohP7e4o",
"prEXPl82hel",
"9E8ZWMJsska"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040475667,
1606303727280,
1606275732179,
1606170778062,
1605789240480,
1605789105997,
1605788676863,
1605788434517,
1604295132557,
1603835861193,
1603114768288,
1602913925835
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3634/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3634/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3634/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3634/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3634/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3634/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper studies efficient strategies for selection of pre-trained models for a downstream task. The main concerns consistently raised by the reviewers were limited methodological novelty, insufficient experimental analysis, unclear findings, and positioning of the paper with respect to related work that was ignored in the initial version. After the author response, R4 raised the score to borderline accept (still indicating the paper is weak without proper comparisons with other methods), whereas all other reviewers remained negative. The paper does have merits, as the methods are simple, and the problem is very practical (and somewhat understudied). However, the AC agrees with the majority that the paper is not ready for ICLR. The novelty is limited and the paper would benefit from more experiments, such as comparisons with simple baselines like early stopping as indicated by R1 and R3, and other methods such as Task2vec which address the same problem. The authors are encouraged to revise the paper according to the reviewers comments and submit it to another top conference.\"}",
"{\"title\": \"Discussion\", \"comment\": [\"We thank the reviewer for his comments. However, we disagree with the proposed suggestions as they are either orthogonal to our method, or have been shown as inefficient, by other related work.\", \"LEEP: This method assumes a pre-trained classification head as part of the pre-trained model, which works for fully/semi-supervised. However, some models from the VTAB benchmark do not fulfill this requirement! (e.g., feature representations originating from a GAN or VAE which are part of the models in the pool)\", \"Batch Norm requires one to adjust the parameters to the new data distribution. Reporting the behaviour of using restricted finetune compute to estimate unrestricted finetune compute accuracy is possible, however that is clearly as (if not more) sensitive of hyperparameter selection. Instead, we focused on doing an extensive study of methods which are known to be less sensitive to hyperparameter search to identify the best models, which are then used with larger finetune compute.\", \"Whilst one could use \\u201cMINE\\u201d, it is known that estimating the MI suffers considerably from a bias - variance tradeoff and no method bypasses those issues (see \\u201cPoole, B., Ozair, S., Van Den Oord, A., Alemi, A., & Tucker, G. (2019, May). On Variational Bounds of Mutual Information. In ICML.\\u201d). The suggested approach \\u201cMINE\\u201d falls into this category and additionally has a large computational requirement as it requires to train a neural network which makes it unusable as a \\u201ccheap\\u201d proxy task.\", \"We restricted ourselves to the 1K samples setting from VTAB on purpose, as it is known that this yields the largest gain in using transfer learning compared to training from scratch. The impact of the size of the model pool is directly visible via the novel definition of regret which we consider as a major contribution compared to other performance proxies.\"]}",
"{\"title\": \"Response after Rebuttal\", \"comment\": [\"I thank the authors for their response. However, I am not at all satisfied with the response and hence vote for rejecting the paper because of limited novelty and unconvincing experiments. Below are some of the changes /experiments that I think must be included in the paper before acceptance.\", \"LEEP can be applicable to any classification models whether it is fully supervised or semi-supervised. To the best of my knowledge, It only requires classification head. So, why it can not be applied to VTAB models is not clear to me.\", \"I agree that early stopping is an open research problem. However, it can be used as a simple baseline in the comparison, e.g., finetuning with only 1 epoch or finetuning with only 10 epochs. I don't agree with the authors that finetuning requires when the batch norm layers are being used. Why can't I finetune a model for 5 epoch without knowing anything about BN layers and then use the premature models\\u2019 test accuracies as ranking scores.\", \"Also, why can't we simply rank the checkpoints by their mutual information between the high-dimensional features and discrete labels of the downstream task? It is a variational lower bound parameterized by a neural network. See: \\\"Belghazi et al. Mine: mutual information neural estimation. arXiv preprint arXiv:1801.04062, 2018\\\".\", \"Moreover, How does amount of data in the target task affects the performance of ranking? What is the effect of number of pretrained models on the ranking?\", \"Based on the above missing experiments, I am keeping my score same as the initial one.\"]}",
"{\"title\": \"Response to rebuttal\", \"comment\": \"Thanks for your replies!\\n\\nFrom your reply, I think this paper would be much stronger if you include empirical comparisons to Task2Vec, or improve Task2Vec. I am increasing my rating to borderline accept, but to be honest, the current version would still be a weak paper without such comparisons.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the thoughtful comments and are grateful to see that the simple yet consistent proposed solution is appreciated.\\n\\n_[Unfortunately, it is only on image data; it would have been great if the authors had used an example from NLP too.]_\\n\\nWe agree that extending this work to include NLP tasks is an important followup project. At this stage, fine-tune strategies on its own are less explored and unclear in NLP compared to vision tasks (e.g., pre-trained models are typically extended by more than one linear classification layer, or the representations are not always taken from the last pre-logit layer).\\n\\n_[Although it does not necessarily outperform the linear algorithm in Figure 6.]_\\n\\nWe see Figure 6 (which is now Figure 7 in the revised version) as a confirmation of some of our claims, rather than a weakness:\\n\\n\\n\\n* Since hybrid with budget B is based on linear on budget B-1, it is **expected that the graphs are fairly similar when experts are the best models** (which is the case in restricted pools)\\n* It shows the **necessity of including the task-agnostic method** - on ALL pool we clearly see that linear is not able to choose the best model\\n* We want to emphasize that Figure 6 cares only about winners, which differs from the rest of the paper where we look at the notion of regret\\n\\n_[The ensembling approach also does not seems to be the optimal solution. The authors could study the generalization performance of the hybrid algorithm to provide further insights.]_\\n\\nThe generalization property of the hybrid strategy using a budget of B, by definition, comes directly from the generalization of the task-aware strategy for B-1 and the first pick of the task-agnostic strategy. Giving any generalization bounds for training or fine-tuning deep neural networks is already a hard and currently open research problem, which we do not aim at solving here. Instead, we showcase empirical failures of each strategy which, interestingly, do not overlap much, enabling consistent improvements using the proposed hybrid strategy.\\n\\n_[An empirical run-time analysis is missing.]_\\n\\nEfficient implementation of all methods, which is crucial for a fair comparison, is out of the scope of this work. At this stage, understanding the limitations of using task-aware (especially a linear classifier) or task-agnostic search methods is the founding block. Run-time analysis, extension towards related work that provides cheaper estimators (such as LEEP (Nguyen et al., 2020), NCA (Tran et al., 2019) or H-Score (Bao et al., 2019)), in order to speed up the entire search process, are all valid future projects that our work opens.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": [\"We thank the reviewer for the thoughtful comments and pointing out the potentially relevant related work. We have updated the main body of the paper, in particular the \\\"Background\\\" section and the \\\"Other related work\\\" section to improve the positioning of our work, and added Figure 2 for further clarification. That being said, we believe that our contributions are relevant to the research community and the practitioners.\", \"[Related methods]\", \"Task2Vec (and Model2Vec) (Achille et. al.):\", \"We highlight multiple aspects when comparing \\u201cmeta-learned task-aware\\u201d strategies such as Model2Vec to our choice for search strategies (also described in the revised paper)\", \"[Computational comparison] Assuming access to M models, our proxy tasks certainly require more compute time than Model2Vec, O(M) compared to O(1) for the search part. However, **the computational requirement for Model2Vec is shifted to the meta-learn algorithm** (which needs to be rerun from scratch whenever new pre-trained models are available - a **different category of model search strategies than what we examine**), which has an asymptotic complexity of O(M) for Model2Vec.\", \"[Hybrid helps Model2Vec] Figure 3 in the Task2Vec paper shows that the **generalist model outperforms the selected expert in 15 out of 50 cases**, sometimes by a large margin - **hybrid strategy should improve their proposed method significantly**, formulating a very interesting future research problem.\", \"P2L (Bhattacharjee et. al.):\", \"Estimates the impact of transferring the learned representations from a source dataset (unclear how one could distinguish multiple models trained on the same dataset) by incorporating the dataset size and the divergence between the upstream and downstream dataset.\", \"This is **orthogonal** to the goal of **selecting a pre-trained model without having knowledge of the meta-data used to train the model in the first hand.**\", \"Taskonomy (Zamir et. al.):\", \"**We can use any model**, whereas Taskonomy is restricted to models trained on the same input with different labels.\", \"**We examine fine-tuning**, whereas for both methods the encoder part which is transferred is not fine-tuned.\", \"While Task2Vec and Model2Vec are directly applicable by ranking models based on their similarity in the embedding space, it's unclear how to apply the model search on new tasks using Taskonomy without semantic relations between the upstream and downstream tasks and without training a new network from scratch.\", \"_[The approach is not particularly novel. There is also no novel technical insight that explains the results.]_\"], \"we_respectfully_disagree_with_the_lack_of_technical_insights_gained_in_this_work\": \"* The failure of task-aware and task-agnostic methods seems to be non-overlapping, depending on the model pool inspected. This is seen through the success of our hybrid approach, a fact that is very surprising\\n* **The hybrid approach is universal**, e.g., Task2Vec would also benefit from including it\\n\\n_[The use of the JFT dataset hampers reproducibility since the dataset is not public. I'd like to see results with JFT excluded.]_\\n\\nThe pool ImageNetAccuracies in the appendix does not contain any proprietary model, and it confirms the improvement of the hybrid strategy over the linear proxy task.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"We thank the reviewer for the thoughtful comments.\\n\\n_[The major weakness of this paper is that it seems there is no consistent strategy to out-perform all other methods in every task.]_\", \"we_respectfully_disagree\": \"Hybrid strategy is **consistent across all pools** as seen in Figure 6 in the revised draft (previously Figure 5) and Figure 17 in the revised supplementary (previously Figure 16).\\n\\n_[Even if for the advanced strategy hybrid proposed in the latter part of this paper, the optimal pick for this hybrid method is almost identical to the linear evaluation in task-aware strategy in ResNet-50 and expert model pools.]_\\n\\nThis is correct and expected, since a hybrid strategy with B models receives B-1 top models from linear evaluation. The challenge is in fact to return the experts when needed (which is something other methods are usually not capable of), whilst not missing the generalist model when they perform best.\\n\\n_[So my biggest concern for this paper is that we don't have a take-home message, other than showing the \\\"No-Free Lunch Theorem\\\" in model selection.]_\", \"we_believe_that_there_are_several_take_home_messages\": [\"This is the **first work** that **formulates** the model-search question with a notion of **regret**, on **real-world scenarios** that are most common in practice for **the end user** - choosing the best model with minimal computational demands\", \"We carefully examine cases in which either Task-agnostic or Task-aware methods fail. It is expected that in general they perform well, however, one would like to understand how strong the failures are. In particular, we observe that **failures do not overlap too much**, which is why we believe that this large-scale study sheds a new light on these failures\", \"Finally, we propose a **simple**, **computationally feasible** search strategy - **hybrid** - that captures these failures under one umbrella, consistently outperforming other methods due to the above mentioned property of failures not overlapping often\", \"_[An interesting direction might be, how we can design a really fast approximation of fine-tuning, so that we can evaluate a model's fitness only by a few iterations (within a very short amount of training time), instead of performing full fine-tuning on the target task.]_\", \"We believe that this is out of the scope of our work since understanding the correct method for early stopping is an interesting and difficult problem on its own. There are several reasons for not including such a study in our paper:\", \"The complexity coming with such an approach which goes way beyond training a linear layer. Both proxy tasks that we analyzed **require only a forward pass** (inference) through the pre-trained networks in order to get the representations\", \"Training a large network (or fine-tuning it) robustly with early stopping for mischosen hyperparameters or initialization parameters is an open research problem\", \"Fine-tuning requires additional knowledge about the architecture (e.g., knowing when the batch norm layers are used) of the networks, which a typical user would not be able to grasp for all the pre-trained embeddings. For our proxy task, **no such knowledge is required** until fine-tuning the winning models\"]}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the thoughtful comments and pointing out the potentially relevant related work. We have updated the main body of the paper, in particular the \\\"Background\\\" section and the \\\"Other related work\\\" section to improve the positioning of our work, and added Figure 2 for further clarification. That being said, we believe that our contributions are relevant to the research community and the practitioners.\\n\\n[Contributions]\\n**Hybrid approach** is not the main and only contribution, but a byproduct of a careful large scale analysis of current available model search strategies.\\n\\n[Not limited to classification]\\nThe proposed method is **not limited to models pre-trained with classification heads, even though we focus only on classification tasks.** Note that VAE and BIGGAN-based models are included (Table 3 of the Appendix). \\n\\n[Related methods] \\nWe thank the reviewer for helping us place the work into the more broad research context! While these methods are indeed related in the general sense, they are **not directly comparable to our method:**\\n\\nDuality diagram similarity (DDS) and DEPARA:\\n - **We can use any model**, whereas Taskonomy is restricted to models trained on the same input with different labels.\\n - **We examine fine-tuning**, whereas for both methods the encoder part which is transferred is not fine-tuned.\\n - The methods aim at computing the results of the Taskonomy dataset faster by only working on a subset of the dataset (the same data for all models).\\n - Adapting DEPARA and DDS to learn some dependency between tasks or models similarly to Task2Vec or Model2Vec would make it a \\u201cmeta-learned task-aware\\u201d strategy, on which we elaborate in the revisited paper.\\n - While Task2Vec and Model2Vec are directly applicable by ranking models based on their similarity in the embedding space, it's unclear how to apply the model search on new tasks using DDS or DEPARA (or taskonomy in general) without semantic relations between the upstream and downstream tasks and without training a new network from scratch.\", \"leep\": [\"By including the VTAB models, **we have access to models pre-trained with different loss functions, fully unsupervised to semi-supervised and strongly supervised**, whilst LEEP is only applicable to models pre-trained for classification tasks\", \"LEEP provides theoretical guarantees only for fixed features, even though fine-tuning outperforms the linear classifier in their work.\", \"Linear correlation is not necessarily transitive, which is why LEEP cannot be used directly to derive the relationship between fine-tuning and linear classifier. Hence, we **go one step further than LEEP** in understanding this behaviour.\", \"[Other suggestions]\"], \"early_stopping\": [\"Training a network robustly with early stopping is an open research problem\", \"Fine-tuning requires additional knowledge about the architecture (e.g., knowing when the batch norm layers are used). For our proxy task, **no such knowledge is required**.\"], \"impact_of_the_dimension\": \"- Already shown in Appendix for Pool **_DIM2048_**. The results for this model pool are **consistent** with the pool **_RESNET-50_** in the main body of the paper and were hence relegated to supplementary material. 
\\n\\nMutual Information (MI) based approaches:\\n - MI could be useful, but there is no clear understanding on how to correctly estimate the MI and its relation to transferability.\\n\\nWe are hopeful that, together with the thoroughly revised Section 2, we addressed the major concerns.\"}",
"{\"title\": \"Reject due to limited novelty and lack of convincing experiments\", \"review\": \"This paper presents a large scale empirical study on pretrained model selection for transfer learning and show that a hybrid approach that combines task-agnostic and task-aware methods outperforms the existing approaches on VTAB benchmark. The paper is well written and easy to follow. Experiments using 46 pretrained models and 19 downstream tasks show the effectiveness of the hybrid strategy in selecting the right model for transfer learning with low computational complexity.\\n\\nOverall, I vote for rejecting the paper as the paper has very limited novelty and experiments are not convincing. In particular, I fail to find the major contributions of the paper except the empirical study on VTAB dataset. While papers related to empirical study are interesting and worth of acceptance, this paper does not provide any major insight that could be useful for the future research on transfer learning. Furthermore, many experiments and comparisons are missing which should be included in the paper for a better understanding of the empirical study.\", \"how_is_the_proposed_hybrid_ranking_strategy_comparable_to_the_model_selection_approaches_presented_in_duality_diagram_similarity\": \"a generic framework for initialization selection in task transfer learning, ECCV 2020; DEPARA: Deep Attribution Graph for Deep Knowledge Transferability, CVPR 2020. These papers should be clearly discussed with possible comparisons in the experiments to show the advantage of the hybrid approach.\\n\\nComparison with many simple baselines are missing in the paper. E.g., How does the hybrid strategy comparable to fine-tuning with early stopping. Can we select pre-trained models by finetuning for only few epochs? How does the number of epoch affect the final performance while comparing to the hybrid strategy?\", \"how_is_the_current_method_comparable_to_leep\": \"A new measure to evaluate transferability of learned representations? Experiments and analysis should be included in the experiments to verify the effectiveness of the hybrid strategy.\\n\\nHow does the size of representation/feature affect the final performance? Does the conclusion still hold with different size of features? How does amount of data in the target task affects the performance of ranking? What is the effect of number of pretrained models on the ranking?\\n\\nMutual information between the features and discrete labels of the downstream task can be used to rank different models for transfer learning. How does the proposed hybrid strategy related to mutual information based ranking strategy? Experimental comparison should be included in the paper to verify this.\\n\\nDoes the ranking strategy and analysis presented in the paper limited to only classification models? In particular, can models trained using self supervised learning where there is no classification head, be used as pre-trained models in the current approach? How does the analysis change while considering self-supervised models which are now-a-days quite popular in representation learning? What about considering discriminators of generative models, e.g., VAE or BigGAN in the transfer learning analysis? More experiments and analysis should be performed in the experiments.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Unclear findings\", \"review\": \"[Summary] This paper presents a large-scale study on model-selection strategies for transfer learning, by performing task-agnostic and task-aware strategies on a large number of models evaluated on a diverse range of tasks.\\n\\n[Strength] The problem setting is novel and interesting. The proposed quantitative measurement of the quality of selected models, named \\\"regret\\\" is well designed.\\n\\n[Weakness] The major weakness of this paper is that it seems there is no consistent strategy to out-perform all other methods in every task. Intuitively, task-aware strategies should be better than task-agnostic strategies, but they perform similarly (almost equally) in all model pools, which is quite surprising.\\n\\nEven if for the advanced strategy hybrid proposed in the latter part of this paper, the optimal pick for this hybrid method is almost identical to the linear evaluation in task-aware strategy in ResNet-50 and expert model pools.\\n\\nSo my biggest concern for this paper is that we don't have a take-home message, other than showing the \\\"No-Free Lunch Theorem\\\" in model selection. So, I hope the authors could re-emphasize what we really learn from this large-scale study.\\n\\nAn interesting direction might be, how we can design a really fast approximation of fine-tuning, so that we can evaluate a model's fitness only by a few iterations (within a very short amount of training time), instead of performing full fine-tuning on the target task.\\n\\nConsidering this limited effective information from this paper, I think it's not suitable for publishing.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good baselines for model selection, but paper ignores many prior papers on this problem\", \"review\": \"Paper summary: This paper looks at the problem of efficiently choosing pre-trained models as initialization for downstream target tasks. It compares 3 strategies, a task-agnostic one which uses imagenet accuracies, a task-aware one which uses the acccuracy of linear classifiers on fixed representations, and a hybrid one which combines the two.\", \"pros\": [\"The evaluation is fairly thorough. I especially like the fact that the authors consider the different axes along which pre-trained models differ (model capacity, generalist/experts etc.)\", \"The pool of downstream datasets is large.\", \"The suggested strategy is simple and easy to implement.\", \"The problem is significant in practice since almost all practical applications of neural networks have this prroblem, and the gains seem large. I wish there was more work on this problem.\"], \"cons\": \"- The biggest issue is that this paper ignores several important papers publiished before on this problem. Especially of note is the Task2Vec approach, which computes model and task embeddings. I would like comparisons both in terms of accuracy/regret as well as computational cost:\\nAlessandro Achille, Michael Lam, Rahul Tewari, Avinash Ravichandran, Subhransu Maji, Charless C. Fowlkes, Stefano Soatto, Pietro Perona; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 6430-6439\", \"other_papers_that_are_also_relevant_and_should_be_cited_and_comparisons_discussed\": \"Bishwaranjan Bhattacharjee, John R. Kender, Matthew Hill, Parijat Dube, Siyu Huo, Michael R. Glass, Brian Belgodere, Sharath Pankanti, Noel Codella, Patrick Watson; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 760-761\\n\\nAmir R. Zamir, Alexander Sax, William Shen, Leonidas J. Guibas, Jitendra Malik, Silvio Savarese; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 3712-3722\\n\\n- The approach is not particularly novel. There is also no novel technical insight that explains the results.\\n\\n- The use of the JFT dataset hampers reproducibility since the dataset is not public. I'd like to see results with JFT excluded.\\n\\nFor acceptance, I would definitely want to see the first of these convincingly addressed.\\n[Updated rating]\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A Hybrid (Ensemble) Approach for Pretrained Model Search\", \"review\": \"### Summary\\n\\nThe paper evaluates three procedures for selecting models for transfer learning. The choices are task-agnostic selection, linear training, and the hybrid approach. They empirically show that the hybrid algorithm works the best on few-shot learning on images.\\n\\n### Feedback\\n\\n* The paper is a straightforward paper and easily understandable. The message is practical, but not very surprising. Unfortunately, it is only on image data; it would have been great if the authors had used an example from NLP too.\\n* The hybrid approach is super-simple, which is nice. The results in Figure 6 confirm that how its ensemble nature helps. Although it does not necessarily outperform the linear algorithm in Figure 6. The ensembling approach also does not seems to be the optimal solution. The authors could study the generalization performance of the hybrid algorithm to provide further insights.\\n* An empirical run-time analysis is missing.\\n* While the authors indicate that all models perform comparatively poorly on the structured tasks, they do not provide specific insights about the root cause of this.\\n* Overall, the idea is simple and practical, but the methodological contributions of this paper is rather limited.\\n\\n--------\\n### Post-Response Update\\nUnfortunately, the authors' response is not satisfactory on multiple issues. Thus, I reduce my rating by one point.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
Dmpi13JiqcX | Disentangling Representations of Text by Masking Transformers | [
"Xiongyi Zhang",
"Jan-Willem van de Meent",
"Byron C Wallace"
] | Representations in large language models such as BERT encode a range of features into a single vector, which are predictive in the context of a multitude of downstream tasks. In this paper, we explore whether it is possible to learn disentangled representations by identifying subnetworks in pre-trained models that encode distinct, complementary aspects of the representation. Concretely, we learn binary masks over transformer weights or hidden units to uncover the subset of features that correlate with a specific factor of variation. This sidesteps the need to train a disentangled model from scratch within a particular domain. We evaluate the ability of this method to disentangle representations of syntax and semantics, and sentiment from genre in the context of movie reviews. By combining this method with magnitude pruning we find that we can identify quite sparse subnetworks. Moreover, we find that this disentanglement-via-masking approach performs as well as or better than previously proposed methods based on variational autoencoders and adversarial training. | [
"disentanglement",
"model pruning",
"representation learning",
"transformers"
] | Reject | https://openreview.net/pdf?id=Dmpi13JiqcX | https://openreview.net/forum?id=Dmpi13JiqcX | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PdPAfR3uwCE",
"kUKcooJZWHv",
"-oVoSwrX5S5",
"-hkkYOtrT8i",
"8J1uz_8GIM7",
"mBKjmCFkSl6",
"wg2g-2jMsly",
"BawWjLEgF5k",
"pZXSvm1NEw",
"dWdl7GTuvUv",
"iAbUhllI7EC",
"0DZtJZeDaDY",
"r-ps45B8POr"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040471988,
1605821401335,
1605821344723,
1605449376684,
1605445577017,
1605300709368,
1605300418434,
1605298858239,
1605297321858,
1604272463593,
1603955665812,
1603946376942,
1603806785285
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3631/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3631/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3631/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3631/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper explores a methodology for learning disentangled representations using a triplet loss to find subnetworks within a transformer. The authors compare against several other methods and find that their method performs well without needing to train from scratch. The reviewers thought this paper was well written and the authors were very responsive during the review period. However, there were some questions about the experimental setup and empirical performance of the paper, leaving the reviewers wondering if the performance was convincing. We agree that there is value in exploring disentangled representations even if they do not necessarily improve performance (as the authors point out), but clearly explaining the reasoning behind all analyses (e.g. specifically choosing domains to introduce a spurious correlation), and justifying differences in performance is particularly important in these cases.\"}",
"{\"title\": \"References added\", \"comment\": \"We have updated our paper and included HUBERT in our related work section.\"}",
"{\"title\": \"VAE baselines added\", \"comment\": \"We have updated the sentiment/genre experiment (Section 3.1) to include two VAE baselines. As shown in figure 2 and table 2, our methods outperform both baselines.\"}",
"{\"title\": \"Reply to Reviewer#3's response\", \"comment\": \"The purpose of the work is to devise a method for disentangling representations of text, and one important potential benefit of such methods is making models more robust, i.e., less reliant on spurious correlations. The method is therefore general in that is appropriate for any application in which robustness is a concern, which in practice is most cases.\"}",
"{\"title\": \"About the response from authors\", \"comment\": \"The response seems reasonable. However, it raises a new question. Since the authors carefully selected the data and designed specific downstrean tasks, How to ensure or reflect the generality of the proposed method. That is, what's the meaning of the proposed work. It is just for specific cases or for general cases.\"}",
"{\"title\": \"Response to reviewer #3\", \"comment\": \"We thank the reviewer for their comments and provide clarifications and responses to specific questions below.\\n\\n**The experimental setup is not convincing. Why only pick the two genres of drama and horror?**\", \"perhaps_we_were_not_clear_enough_in_motivating_our_experimental_setup_here\": \"We are interested in examining the degree to which disentanglement via different methods affords robustness to reliance on spurious correlations, such as (in this example) associating a particular genre with a specific sentiment (here, e.g., horror with negative sentiment). In many cases, it is reasonable to assume that either we do not want to rely on certain correlations like this for reasons of fairness, and/or the conditional distributions \\u2014 p(sentiment|genre) \\u2014 may shift in the test distribution. With this framing in mind, we selected Drama and Horror because among all the major genres, these two genres of reviews have the most correlation between genre and sentiment: Drama reviews are more likely positive and Horror negative. This is to create a spurious correlation between the genre and sentiment, so that we can probe for robustness to the same.\\n\\n\\n\\n**Figure 3 does not show that the proposed method achieves better results than the finetuned baseline**\\n\\nWe feel there is a misreading of the figure here (perhaps we could improve the presentation and description). Figure 3 does show our models outperforming the finetuned baseline, with respect to the representations that it induces. When trained on sentiment (upper row), the representations from the finetuned model are still clustered with respect to genre (marker shape); this clustering is not observed using the proposed masking approaches. When trained on genre (bottom row), the two genres are not well separated; whereas the representations from our two models, although still imperfect, are clearly better separated than the finetuned model. This also aligns with the quantitative results in Figure 2 and Table 2.\\n\\n\\n\\t\\n**How to use these disentangling representations in downstream tasks, such as text classification, natural language inference, and semantic similarity? It is better to discuss and conduct experiment to show the advantages of their disentangling representations in downstream tasks.**\\n\\nWe did report the STS-Spearman correlation in Sec 3.2, which is a semantic similarity benchmark. And in Sec 3.1, we designed a specific text classification task with two correlating attributes, which we believe demonstrates the advantage of our model over other baselines with respect to robustness, i.e., performing well in situations where other methods result in overreliance on artifacts (spurious correlations) in the training set. We feel these experiments do show the important robustness advantages of this approach in downstream tasks.\"}",
"{\"title\": \"Response to Review #1\", \"comment\": \"We thank the reviewer for their detailed comments and suggestions, and respond to all concerns below.\\n\\n**I wish the authors performed their first experiment on more domains: books, music, etc. and consider more than two labels. From current results, it's hard to confidently conclude that this approach is generalizable.**\\n\\nWe note that we did perform experiments on two different types of datasets, corresponding to (quite) different tasks; we thought this would be more compelling than a suite of experiments on sentiment tasks. We would argue that the fact that our model does well on these two considerably distinct datasets/tasks indicates that it has reasonably good generalizability. We will consider including additional experiments on more datasets for a camera-ready version, however.\\n\\n**Judging based on Figure 4 results I'm not convinced that the proposed approach does better than the finetuned (which I believe has a trained classifier on top of BERT) approach especially for Semantic tasks. Perhaps a discussion/ error analysis would be appropriate given better results on Syntax tasks.**\\n\\t\\nThe reviewers\\u2019 observation is correct: our model performs roughly on par with the fine-tuning method in Figure 4 (arguably a bit better, but the objective is multivariate so it is hard to say). But we would highlight that the main purpose of this paper is to provide a new way of looking at the problem of learning disentangled representations; in Figure 3 we can see that the representations learned using the finetuned approach fail to achieve the level of disentanglement enjoyed by the proposed approach. And again, ours are learned without modifying the BERT weights, which we think is an interesting finding. Furthermore, in the robustness experiments (Figure 2) we show that the fine-tuned approach fares considerably worse than the proposed approach. \\n\\n**Also a discussion on the results for masking weights vs. masking hidden units is missing. If I'm not mistaken, mathematically, hidden unit masking is a subset of weight masking, where masking an item in hidden activation is equivalent to masking an entire column in the weight matrix?**\\n\\t\\nIn principle, the reviewer is correct: masking hidden units is technically a subset of weight masking. Effectively masking hidden representations is a strategy by which to select grouped sets of weights to mask simultaneously (i.e., all associated with individual nodes), whereas weight masking has more flexibility and no such grouping of masked weights. We therefore think it is intuitive conceptually to think about these as distinct strategies. And from an optimization point of view, masking hidden units may be easier to optimize.\\n\\n**Reply to comments:**\\nWe thank the reviewer for the references and will add them to the related work section in the updated version.\"}",
"{\"title\": \"Response to Review #2\", \"comment\": \"We thank the reviewer for their detailed and insightful comments. First, we would like to make one clarification on the reviewer's comment:\\n\\n **The triplet loss as formulated in this work seems to make it possible to disentangle only two factors of variation (a) and (b).** \\n\\nIt is true we have only shown the approach for two factors, but the method is sufficiently general to be amenable to additional factors, though one would have to construct triplets for all-pairs, which would not scale well to a large number of factors. We view extensions to such cases as an interesting direction for future work.\\n\\nWe respond to all questions and comments below.\\n\\n**Response to Questions & Comments**\\n1. **What would performance look like if masks were trained after fine-tuning on sentiment/genre classification? Rather than training masks directly on top of BERT-base. It would be interesting to see if the model is stable to recover from fine-tuning on data with spurious correlations and still produce disentangled representations.**\\n\\nThis is an interesting question, and something we did not try. We would conduct more experiments to verify this and update our paper with the results as soon as possible. We are thankful for the suggestion!\\n\\n2. **Is every single weight/activation masked at every transformer layer? The paper seems to lack some specifics about exactly what layers/weights are masked. Along these lines, did you experiment with masking only the last few layers? This could save time & parameters**\\n\\nThe reviewer is correct to point out that we should have been more explicit about this; We only mask the last 9 layers of the model (which we found in preliminary experiments on dev data to work well). It appears we omitted this implementation detail and we will clarify this. \\n\\n3. **In Figure 3 is the model training with L_{cls} corresponding to sentiment and then visualized for sentiment and genre? Or is the top trained with the supervised sentiment loss and the bottom for supervised genre loss?**\\n\\nThe latter; the top is trained with sentiment loss and the bottom genre loss. \\n\\n4.**It would be interesting to explore an L1 penalty on the masks for increasing sparsity, possibly in conjunction with magnitude pruning as well.**\\n\\nThank you for this suggestion. We did have an L1-penalty that discourages the mask for different attributes to be on (equal to one) in the same position. The idea there is to encourage mutually exclusive masks. But it would be interesting to also explore an L1 penalty on the masks to improve overall sparsity. We plan to do more experiments on that for the camera-ready version. More generally, we hope this approach suggests a line of alternative sparsity-inducing methods for disentanglement via masking.\\n\\n5. **The WC task doesn't feel very representative of sentence \\\"semantics\\\"**\\n\\nWe agree that the degree to which a representation captures \\u201csemantics\\u201d is hard to measure (or even define). We here follow the setting established in prior work (Conneau et al., 2018) regarding the \\u201csemantic\\u201d probing tasks. It captures a lexical level of \\u201csemantics\\u201d, which is often used as a substitute for the real \\u201csemantics\\u201d. Note that we also use a semantic-similarity task (STS) to better capture the sentence level semantics.\"}",
"{\"title\": \"Response to Review #4\", \"comment\": \"We thank the reviewer for their comments, and we are glad the reviewer found this new direction to be interesting.\\nWe would like to address two of the main concerns raised in this review, before responding to specific questions:\", \"lack_of_comparison_to_variational_auto_encoders\": \"We believe this may be a misunderstanding on the part of the reviewer. We in fact compare with VGVAE, which is a VAE model, in our experiments involving disentangling semantics and syntax. We show results in Figure 4, which demonstrates that the proposed method outperforms VGVAE. Because this VGVAE model was specifically designed for disentangling semantics and syntax, we did not include comparisons to it in the sentiment/genre experiment. We agree that a comparison to a (different) VAE-based baseline in the sentiment/genre experiment would strengthen the work, and plan to update the paper with the results from this as soon as possible.\\n\\nThe reviewer correctly points out that we do not achieve \\u201cstate of the art\\u201d (SOTA) on any particular standard benchmark dataset. However this is not the primary aim of this paper. Our interest here is to learn disentangled representations, and there is no existing benchmark for disentanglement in NLP. We aim to achieve this in service of robustness, e.g., to make models less sensitive to spurious correlations (as mentioned by R2). Our experiments are therefore designed to probe the degree to which the proposed approach achieves disentanglement (and robustness); we think (as does R1) that the results are convincing in this respect. \\n\\nMore generally, while achieving SOTA on benchmark datasets is one means of showing the value of particular methods or approaches, we argue that we should not, as a community, require all research to be focussed on topping leaderboards. For example, this would largely preclude any work on robustness and interpretability, which are key open problems in NLP and ML more broadly. (See also Ethayarajh and Jurafsky, 2020: https://arxiv.org/abs/2009.13888).\", \"response_to_your_other_questions\": \"\", \"q1\": \"Would training binary masks be a speedup over fine-tuning?\\nThere is no reason to believe that training binary masks will be faster than fine-tuning the model. However, binary masks do have the advantage of requiring less memory, which is often at a premium in GPU-based computations. .\", \"q2\": \"Does using the pretrained model (vs. one trained from scratch) help?\\n\\nIn our approach we do not fine-tune BERT, so if we randomly initialized this the method would not work. The insight here is to uncover existing subnetworks that yield disentangled representations from pretrained models, so training BERT from scratch would not be a viable approach here.\", \"q3\": \"Have you considered masking a subset of the weights/activations (e.g. only in the last layer)?\\nGreat question. We only mask the last 9 layers of the model (which we found in preliminary experiments on dev data to work well). It appears we omitted this implementation detail and we will clarify this.\", \"q4\": \"Do you have any intuition about the learned masks? E.g. are most weights/activations being removed? How much overlap is there between the masks learned for each attribute?\\nFor Sec 3.1 and Sec 3.2, only a small percentage of the weights/activations are masked (~1%) at convergence. 
We note this is an interesting finding in its own right; apparently masking out a small fraction of weights can substantially affect the degree of disentanglement. The overlap is very small between the two attributes (close to 0), which is intuitive.\"}",
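As a rough illustration of the masking setup described in these responses, the sketch below gates a frozen linear layer with a learned binary mask; the straight-through relaxation and the initialisation are assumptions made for illustration, not necessarily the paper's actual training scheme:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    """A frozen pretrained linear layer whose weights are gated by a learned
    binary mask; only the mask logits receive gradients."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.register_buffer("weight", linear.weight.detach().clone())
        self.register_buffer("bias", linear.bias.detach().clone())
        # Small positive init so the mask starts out all-ones.
        self.mask_logits = nn.Parameter(torch.full_like(self.weight, 0.01))

    def forward(self, x):
        hard = (self.mask_logits > 0).float()       # binary mask, forward pass
        soft = torch.sigmoid(self.mask_logits)      # relaxation, backward pass
        mask = hard + soft - soft.detach()          # straight-through estimator
        return F.linear(x, self.weight * mask, self.bias)

# In practice one would wrap, e.g., only the last k transformer layers.
layer = MaskedLinear(nn.Linear(768, 768))
out = layer(torch.randn(2, 768))
```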
"{\"title\": \"New approach to disentangling representations\", \"review\": \"**Summary**:\\n\\nThe paper proposes a procedure to extract disentangled representations from pretrained BERT models. In particular, the paper proposes learning binary masks over BERT weights (or, as an alternative, over BERT activations) such that the resulting representations correspond to the desired aspect representations. The model requires additional supervision (binary labels or example triplets) and training (for the masks but not the BERT weights). The experiments aim to perform disentangling to ensure that (1) the learned representation does not \\u201cleak\\u201d a potentially sensitive attribute, and (2) the downstream classifier\\u2019s performance is good across all subgroups formed by the attributes. The experiments show that the proposed method outperforms baselines such as unmasked BERT, unmasked-but-finetuned BERT, and unmasked-but-adversarially-finetuned BERT.\\n\\n**Concerns**:\\n\\nThe fact that one can uncover disentangled representations from BERT models by masking weights/activations is a nice result and I'm not aware of similar approaches for BERT. However, it's unclear from the paper that this approach outperforms previous alternatives:\\n* First, the abstract mentions that the approach is the same or better than variational auto-encoder approaches, and I don't see it mentioned elsewhere in the main text. Am I missing something?\\n* Second, the paper does not show improved results on any benchmarks.\\nAs a result, I'm not sure whether the paper will be impactful enough for the community.\\n\\n**Other Questions**:\\n\\n* The proposed approach involves training binary masks rather than fine-tuning the BERT weights. Given that the mask has the same shape as the weights, it's unclear whether this is a major speedup. Could you discuss this more?\\n* Does using the pretrained model (vs. one trained from scratch) help?\\n* Have you considered masking a subset of the weights/activations (e.g. only in the last layer)?\\n* Do you have any intuition about the learned masks? E.g. are most weights/activations being removed? How much overlap is there between the masks learned for each attribute?\\n\\nOverall, I like this research direction, but I think it requires more work to be accepted.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Review - AnonReviewer2\", \"review\": \"The paper presents a way to learn disentangled representations with respect to target attributes of interest by learning to mask weights or activations. A particular piece of text is encoded into distinct vectors that capture different factors of variation in the data. The method involves learning masks for each factor of variation while keeping the pre-trained model parameters fixed. The masks for every layer are trained using a combination of a triplet-loss, attribute classification loss, and one that encourages masks for different factors to be different across all layers. The triplet loss forces representations of examples that are similar with respect to a particular attribute to be closer than one that are similar based on another attribute.\\n\\nModels are evaluated on a sentiment/genre classification on a dataset sampled in such a way that introduces spurious correlations between genre and sentiment but evaluated on data that does not have any such correlation. The approach is also evaluated on disentangling syntax and semantics.\\n\\nStrengths\\n\\nBuilding models that are robust to spurious correlations in data is important for a variety of reasons and learning disentangled representations is a promising way to achieve that. This paper shows good generalization performance on datasets with such characteristics.\\n\\nThe overall approach is simple and only requires training masks over weights/activations at each layer. The masks are trained with a fairly straightforward choice of training objectives.\\n\\nThe paper is well written and the overall approach is easy to understand.\\n\\nWeaknesses\\n\\nThe triplet loss as formulated in this work seems to make it possible to disentangle only two factors of variation (a) and (b).\\n\\nThere is still a fair amount of attribute leakage and the probe designed to measure this leak is only a single layer MLP, there might be more leakage with stronger probes.\\n\\nThe weight masking strategy significantly increases the number of parameters (although the masks are binary, so it just requires a single bit as opposed to 16/32 bit floating point numbers). In this particular work, the number of parameters triples, and it scales linearly with the number of attributes as well.\\n\\nIt requires running the model forward multiple times to get representations that encode different factors of variation.\\n\\n\\nQuestions & Comments\\n\\nWhat would performance look like if masks were trained after fine-tuning on sentiment/genre classification? Rather than training masks directly on top of BERT-base. It would be interesting to see if the model is stable to recover from fine-tuning on data with spurious correlations and still produce disentangled representations.\\n\\nIs every single weight/activation masked at every transformer layer? The paper seems to lack some specifics about exactly what layers/weights are masked. Along these lines, did you experiment with masking only the last few layers? This could save time & parameters\\n\\nIn Figure 3 is the model training with L_{cls} corresponding to sentiment and then visualized for sentiment and genre? 
Or is the top trained with the supervised sentiment loss and the bottom for supervised genre loss?\\n\\nIt would be interesting to explore an L1 penalty on the masks for increasing sparsity, possibly in conjunction with magnitude pruning as well.\\n\\nThe WC task doesn't feel very representative of sentence \\\"semantics\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Light-weight approach to untangle language model representations\", \"review\": \"This paper proposes a masking strategy to identify subnetworks within language models responsible for predicting different text features. This approach requires no fine-tuning of model parameters and still achieves better results compared to previous approaches. Their experimental results on the movie domain show some level of disentanglement is achieved between sentiment and genre. Disentanglement capabilities of their model between sentence semantics and structure, are also tested on four tasks.\", \"pros\": [\"Paper is well-written and the idea is explained well.\", \"Experiment results are convincing and support the claims.\", \"Achieving comparable results to SOTA without the need to train or finetune models is interesting especially from a computational point of view.\"], \"cons\": [\"I wish the authors performed their first experiment on more domains: books, music, etc. and consider more than two labels.\", \"From current results, it's hard to confidently conclude that this approach is generalizable.\", \"Judging based on Figure 4 results I'm not convinced that the proposed approach does better than the *finetuned* (which I believe has a trained classifier on top of BERT) approach especially for Semantic tasks. Perhaps a discussion/ error analysis would be appropriate given better results on Syntax tasks.\", \"Also a discussion on the results for masking weights vs. masking hidden units is missing. If I'm not mistaken, mathematically, hidden unit masking is a subset of weight masking, where masking an item in hidden activation is equivalent to masking an entire column in the weight matrix?\"], \"comments\": [\"Although the idea of masking model parameters to achieve untanglment is new, there has been [previous work](https://www.aclweb.org/anthology/P18-1069.pdf) on using dropout to identify sub-parts of the network that contribute more/ less to model predictions framed as a confidence modeling task. Authors may consider adding it to related work.\", \"Another missed citation under related work is [HUBERT](https://arxiv.org/pdf/1910.12647.pdf) which examines untanglement of semantics and structure across a wide range of NLP tasks.\"], \"minor_typos\": [\"\\\"we *measure* evaluate them on four tasks ...\\\" on page 7\", \"\\\"Technically, in the Pruned + Masked Weights method, *the* refining the masks ...\\\"\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"an interesting work on disentangling representations of text\", \"review\": \"The paper proposes a problem of disentangling representations generated in pretraining models, such as BERT. That is, it is possible to learn disentangled representations that encode distinct, complementary aspect representations. To this end, the authors proposes a method that employs the mask technique on transformer weights or hidden units to find the subset of features correlating with a specific task. The experimental results show that the proposed method can encode particular aspects while weakly encoding others. The main contributions of the paper is the introduction of binary masks to identifying some subnetworks, which may correlate with specific tasks, within pretrained models. Overall, the paper is well written and is easy to follow.\", \"concerns\": \"1. The experimental setup is not convincing. The authors just consider movie reviews corresponding to Drama and Horror from IMDB and exclude reviews corresponding to other genres. It is obvious that considering only two genres is not convincing and more genres should be considered in the experiments. So, the authors should answer the following questions: (1) Why do the authors just selected these two specific genres to conduct the experiments? (2) Do the authors conduct similar experiments on other genres and what about the experimental results?\\n2. Figure 3 does not show that proposed method achieves better results than do the two baselines. In fact, the finetuned baseline performs very well according to Figure 3. I suggest that the author adopts some quantitative measures to accurately reflect the differences.\\n3. In addition, how to use these disentangling representations in downstream tasks, such as text classification, natural language inference, and semantic similarity? It is better to discuss and conduct experiment to show the advantages of their disentangling representations in downstream tasks.\", \"minor_comments\": \"1. In Formula (9), the parentheses are redundant.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
9MdLwggYa02 | ROMUL: Scale Adaptative Population Based Training | [
"Daniel HAZIZA",
"Jérémy Rapin",
"Gabriel Synnaeve"
] | In most pragmatic settings, data augmentation and regularization are essential, and require hyperparameter search.
Population based training (PBT) is an effective tool for efficiently finding such hyperparameters, as well as schedules over them.
In this paper, we compare existing PBT algorithms and contribute a new one: ROMUL, for RObust MULtistep search, which adapts its stepsize over the course of training.
We report competitive results with standard models on CIFAR (image classification) as well as Penn Treebank (language modeling), which both depend on heavy regularization.
We also open-source hoptim, a PBT library agnostic to the training framework, which is simple to use, reentrant, and provides good defaults with ROMUL. | [
"hyperparameter search",
"population based training",
"differential evolution",
"hyperparameter optimization",
"online optimization",
"deep learning"
] | Reject | https://openreview.net/pdf?id=9MdLwggYa02 | https://openreview.net/forum?id=9MdLwggYa02 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"RVatYJBi8b",
"8KmuGtG_zT7",
"7Jc63UcuWay",
"Qn0Jrxqgae",
"PxU_UC-3O3H",
"KvktTAWX9zV",
"XO8a9tf_Mtr",
"roOcGPNDSyG",
"VgUkTf0kTTT",
"ms9ZpSTvUz",
"K4g1caSfj_r",
"dcE-J8OQYiq",
"2QSZC-kXVHt"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040462961,
1606070278189,
1606065960445,
1605893500578,
1605893420734,
1605893390594,
1605893037341,
1605892849461,
1605892740512,
1604017289493,
1603886562228,
1603716409723,
1602751695125
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3628/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This submission proposes a variant of population based training (PBT) for hyperparameter selection/evolution, aimed at addressing drawbacks of existing variants (e.g. the coupling of the choice of checkpoint with the choice of hyperparameters). Reviewers generally agreed that the paper is interesting and covers an important topic, and the evaluation does show improvements over existing PBT variants. On the other hand they also raised a few important issues:\\n\\n1. The `hoptim` library is claimed as a primary contribution of the work, but it is not clear from the manuscript what benefits this library offers over existing software. When claiming a library as a main contribution, it is helpful to provide a more thorough description of the software and its benefits, and/or ideally a link (anonymized for review) to the software. The authors did respond by providing a brief description of the benefits of the library, mitigating this issue somewhat. However it's still difficult to discern how/whether to weigh the open source library as a main contribution of the paper.\\n\\n2. The evaluation is not very convincing: the differences are small and error margins are not provided for the neural network-based experiments, meaning that any differences could be due to noise. The authors fairly point out that it is difficult to perform multiple runs of these experiments as the resource requirements are large, and they have done 20 runs of the Rosenbrock experiment with smaller compute requirements. But the reviewers were not convinced that the Rosenbrock experiment reflects the method's application to neural network hyperparameter selection; the problems are too different. The submission would be significantly stronger if it included results over multiple runs of an \\\"intermediate\\\" sized experiment on a problem involving a neural network demonstrating that ROMUL outperforms competing approaches by a statistically significant margin.\\n\\n3. The proposed approach is ultimately heuristic. This is not necessarily a problem if there are strong empirical results demonstrating the efficacy of the proposed heuristic, but in this case the empirical results didn't convince (see point 2).\\n\\nGiven these concerns raised by reviewers, the submission is not quite ready for ICLR. I hope the authors will consider resubmitting the paper after improving it based on the reviewers' feedback.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thanks for your reply and for clarifying some of the points that were unclear in the paper.\\n\\nI appreciate the effort in running more seeds for some of the experiments and providing results for PBT with multiplicative steps. However, I agree with AnonReviewer1 in the fact that while results on the Rosenbrock benchmark might make a good motivating example, it is not enough to prove that the proposed method outperforms existing population-based approaches when it comes to optimizing neural networks. The paper would benefit from statistically significant results on problems involving neural networks, which do not need to be as large scale as the ones in this submission. In that case, it would be fine to provide single seed results for the large scale experiments reported in this submission.\\n\\nI believe that running such experiments is out of the scope of the rebuttal phase and will require major changes to the paper. For this reason, I will keep my original rating. As I wrote in my review, I believe this paper studies some important research directions and I would suggest to include the proposed changes in a future submission.\"}",
"{\"title\": \"Suggestions to improve the paper...\", \"comment\": \"Hi - I am just writing to say that I have read the comments and will not be changing my score. In saying that, there are elements of the work which seem promising so below are my two main suggestions to improve it for future submission (I suspect it will take >> 1 week).\\n\\nThe experiments are not at all convincing. This does not mean they need to be *larger*. In fact, it would be great if there was an intermediate experiment, where PBT does well, such as a small RL task. The Rosenbrock is good to motivate/explain but it is not enough to convince, and so you need something slightly larger which can produce robust and intuitive results. Then after that, you can include the larger parts, but I don't trust results with one seed so alone would have to discount those. \\n\\nI am also not convinced that the library should be introduced as a contribution, unless there is a strong empirical reason why it is better than ray tune, who clearly have a well-functioning PBT algorithm (I don't work for Ray). I think your paper would be better if you focus more on the findings regarding why PBT fails and how your proposed solution solves this problem, rather than confounding it by discussing a library which is very hard to evaluate. The presence of the open source code is of course welcome, but as a supplement.\"}",
"{\"title\": \"Thank you for productive feedback\", \"comment\": \"Thanks to the reviewers for productive feedback. We added some experiments on Rosenbrock to show the behavior of the various PBT algorithms:\\n1. in statistical robustness across several runs,\\n2. For more hyperparameters (larger number of variables optimized by PBT).\"}",
"{\"title\": \"review answer (2/2)\", \"comment\": \"> The library is presented as a second major contribution, but it is not clear why the reader would choose to use it over existing libraries such as ray tune, which are popular and widely used. There is no comparison or discussion here, other than just saying that the new library is better. I also couldn\\u2019t find the library anywhere, the supplementary material is just a two page pdf, and there is no anonymized link. Please correct me if I missed this.\\n\\n\\nThe package is indeed not yet open sourced but will be when we make the article public.\", \"compared_to_ray_tune\": \"- Hoptim is simpler, standalone for PBT, independent of any scheduler, while Ray Tune depends on Ray (API for distributed training, with a central server).\\n - Experiments can be resumed and agents can be added to the population.\\n - Hoptim has multiple optimization benchmarks and training use cases that come with the library.\\n\\nOverall, we have had issues with ray on a slurm cluster and wanted to keep full flexibility with hoptim both in term of research (being able to master every piece of it) and of scheduling (split the workers through several jobs, adding/removing workers, handling preemptions). The current design is completely decoupled from the scheduling part, it does not assume a slurm cluster but does require a shared file system. Eventually, we expect that ROMUL can be easily ported to Ray Tune anyway.\\n\\n> The ICLR 2020 template was used (rather than 2021).\\n> Bottom of page 7, \\u201craw\\u201d -> \\u201crow\\u201d\\n\\nThank you for noticing, this is updated.\"}",
"{\"title\": \"review answer (1/2)\", \"comment\": \"Thank you for your review\\n> The method is based on heuristics and the experiments are unfortunately not rigorous: the gains are small and it is a single seed. To increase my score, I would need to see more robust results that make these heuristics convincing, for example multiple seeds with clear outperformance (ideally statistically significant). \\n\\n> Experiments are only run a single time, and this is surely a noisy process. Given that, the gains vs. PBT seem small. It is entirely possible that this small gain is reversed in a second run. If the TransformerXL is too expensive, then a smaller experiment which can be repeated multiple times would be a stronger piece of evidence for the method\\u2019s efficacy.\\n\\nWe understand the concern about significance of the results, and it\\u2019s an important point when evaluating methods of hyperparameter tuning. In practice, our experiments have dozens of workers in parallel, and it\\u2019s been challenging to provide sensitivity analysis on the neural network trainings given the significant resource requirements (24 GPUs x 12 hours for one experiment on average). Note however that on PTB we provide 2 runs (one with 16 workers and the other with 32) with consistent behaviors, even though it is definitely not enough for statistically significant evidence. \\nTo partially overcome this lack of statistical evidence, we repeated the Rosenbrock benchmark on 20 runs, and performed a t-test on the results to assess the signicativity of the difference. The p-value was below 1e-4 highlighting that ROMUL performs significantly better than other methods on this ill-conditioned problem. These results are now included in section 3.2 of the paper and in the Table 3 of the appendix.\\n\\n> The main contribution of the work is not convincing. It simply replaces one heuristic for another. While the results show an improvement, it is not clear. It would also be important to see ablation studies for the newly introduced parameters (e.g. m). In addition, some demonstration of the phenomena described having an influence on the performance would be helpful. The authors claim to reduce the meta-parameters, yet introduce new parameters (F_1, F_2 and m).\\n\\nNeither F1, nor F2 nor m are modified throughout all of the experiments, so they should not be considered as parameters but as constants which a user does not have to care about. Users only need to provide bounds for the parameters, and the scaling of the updates automatically adapts during the course of the training. This is a major difference compared to other optimizers, for which mutation step sizes need to be carefully provided. Indeed, this mutation step size has a major impact on the optimization as can be seen through the Rosenbrock benchmark: different step sizes in Initiator PBT lead to an order of magnitude improvement (see the newly added table 3 in the Appendix). When such scaling is unknown, or if there are many hyperparameters, tuning it manually is in our opinion the major difficulty of current PBT algorithms since good parametrization on one application cannot be straightforwardly transferred to another application. While ROMUL does not always reach the performance of hand-tuned PBT algorithms for specific applications, we aimed at showing it was good on a broad range of settings, without manual tuning.\\n\\n> Also how was the size of the PBT step chosen? For the transformer experiment it goes from 1 epoch -> 10 epochs. 
Some ablation studies for these parameters would be needed for a reader to fully understand how to use this method on a new task.\\n\\nWe have indeed limited our experiments on trainings with around 100 to 300 epochs, and 1 step per epoch to obtain somehow similar trainings. This is now explicitly mentioned in the beginning of the experiment section (Section 3). The experiment with 10 epochs on PTB was only to highlight and try to understand the difficulties which arose during the training.\"}",
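The two-sample Welch t-test mentioned in this reply can be reproduced with SciPy as follows; the arrays below are random placeholders standing in for the 20 per-run results, not the actual experimental data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
romul_losses = rng.normal(loc=0.5, scale=0.2, size=20)     # placeholder
baseline_losses = rng.normal(loc=1.5, scale=0.4, size=20)  # placeholder

# equal_var=False selects Welch's variant, which does not assume the two
# samples share the same variance.
t, p = stats.ttest_ind(romul_losses, baseline_losses, equal_var=False)
print(f"t = {t:.2f}, p = {p:.2e}")
```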
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review\\n\\n> While the authors highlight the hoptim library as one of the main contributions of the paper, they do not describe what are their advantages over existing hyperparameter optimization frameworks or existing PBT implementations (e.g. the one in Ray). As far as I can tell, there is no source code or link to an anonymous repository so that we can evaluate this contribution.\\n\\nThe package is indeed not yet open sourced but will be when we make the article public.\", \"compared_to_ray_tune\": \"- Hoptim is simpler, standalone for PBT, independent of any scheduler, while Ray Tune depends on Ray (API for distributed training, with a central server).\\n - Experiments can be resumed and agents can be added to the population.\\n - Hoptim has multiple optimization benchmarks and training use cases that come with the library.\\n\\nOverall, we have had issues with ray on a slurm cluster and wanted to keep full flexibility with hoptim both in term of research (being able to master every piece of it) and of scheduling (split the workers through several jobs, adding/removing workers, handling preemptions). The current design is completely decoupled from the scheduling part, it does not assume a slurm cluster but does require a shared file system. Eventually, we expect that ROMUL can be easily ported to Ray Tune anyway.\\n\\n> Results for baseline methods are slightly worse than those in the literature even when using the code provided by the authors, and authors are encouraged to explain the reason for this. \\n\\nThe official github repo (https://github.com/kimiyoung/transformer-xl) does not contain the script for training with Pytorch on PTB. We used the script mentioned in this thread https://twitter.com/ZihangDai/status/1245905407350112256 by the first author of Transformer-XL, which is in github: zihangdai.github.io/misc/ptb.zip. The example_log.txt file in this zip states provides \\u201ctest ppl 55.60\\u201d at the end of the training, which we do replicate (we obtain 55.43).\\n> ROMUL seems to outperform other PBT methods when tuning TransformerXL, but the benefits on PBA when applied to CIFAR-10 are not so clear. It is difficult to evaluate the significance of these figures, as no standard deviation across seeds is reported.\\n\\nWe understand the concern about significance of the results, and it\\u2019s an important point when evaluating methods of hyperparameter tuning. In practice, our experiments have dozens of workers in parallel, and it\\u2019s been challenging to provide sensitivity analysis on the neural network trainings given the significant resource requirements (24 GPUs x 12 hours for one experiment on average). However, we repeated the Rosenbrock benchmark on 20 runs, and performed a two sample Welch t-test on the results to assess the signicativity of the difference. The p-value was 1.4e-5, meaning that ROMUL performs statistically significantly better than other methods on this ill-conditioned problem. These results are now included in section 3.2 of the paper and in the Table 3 of the appendix.\\n\\n> My main concern regarding the experimental setup has to do with the mutation constant used for Initiator PBT and Truncation PBT. The works by Li et al. (2019) and Jaderberg et al. (2017) perturb hyperparameters by a multiplicative factor of 1.2 or 0.8 instead. 
This enables a much finer-grained search space than the one implemented in this submission, where the additive mutation constant might be too large for some parameters given the range in which these parameters are defined.\\n\\nThis is true that such multiplicative factor allow reaching values arbitrarily close to the optimal ones, however, this is still lacking because:\\nwhile the steps can \\u201creach\\u201d closer values, it does not mean they \\u201cconverge\\u201d to this values, since they will still have \\u201cclose to\\u201d fixed steps around the optimal value.\\nSuch multiplicative steps assume a logarithmic dynamic, which may be inaccurate, hence large values would always have larger step than small values. For dropouts for instance, this does not seem accurate since we would probably need smaller steps on both low and high dropouts.\\nThese values may still need to be adapted for each specific application, which is impractical at scale.\\nOverall, this comes back to the adaptability requirement that has led to ROMUL: without step adaptation mechanism, it is not possible to converge to optimal solutions.\\nIn any case, we have rerun the Rosenbrock benchmark with multiplicative steps for Initiator PBT, reaching results in between the big steps and the small steps, leaving still an order of magnitude gap compared with ROMUL. This has been added in Table 3 in the Appendix.\"}",
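To make the contrast between the two mutation schemes explicit, a small sketch follows; the 0.8/1.2 factors are those of Li et al. and Jaderberg et al., while the additive bounds and divisor mirror the setup described in the review and are otherwise arbitrary:

```python
import random

def mutate_multiplicative(h):
    # Perturb by a fixed factor: the step size scales with the current
    # value, implicitly assuming log-scale dynamics.
    return h * random.choice([0.8, 1.2])

def mutate_additive(h, lo=0.0, hi=224.24, divisor=30):
    # Fixed additive step derived from the search range; the step never
    # shrinks near an optimum, motivating ROMUL's adaptive step sizes.
    step = (hi - lo) / divisor
    return min(hi, max(lo, h + random.choice([-step, step])))

print(mutate_multiplicative(0.5), mutate_additive(20.0))
```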
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review\\n\\n> The major part of the proposed method ROMUL is to replace some update rules based on Differential Evolution, which is a well-studied method in Evolutionary Algorithms. The novelty of the proposed method ROMUL is not high. But more important, it is unclear why such modifications are necessary. In other words, what new challenges in HPO can be addressed by conducting these modifications to PBT. Without clear and strong reasons to motivate these modifications, it is hard to evaluate the proposed method.\\n\\nThe Differential Evolution (DE) approach is indeed a well studied method for black box optimization, which works pretty well in the context of finding optimal hyperparameters. It\\u2019s based on a population of individuals, but is not a \\u201cPopulation Based Training\\u201d (PBT) algorithm, as we define it in the context of our paper, which solves a different problem.\\nIn (standard) DE, we train N models to convergence, report their loss, train N other models suggested by DE, and repeat the process T times. Because several iterations are required, DE does not converge before `T` times the wall clock time of a single training at best. Romul does not follow this scheme.\\nIn PBT approaches (including our approach Romul adapted from DE), we evaluate the models after they have done N/T of their training. Thus, we aim at finding a good model with good hyperparameters within the wall clock time used to train a single model. Our approach also finds schedules over hyperparameters, rather than fixed values, which are shown to improve models performance.\\nFitting a restricted compute budget, and the need for schedules over some hyperparameters (eg data augmentation) are key reasons why practitioners use PBT approaches (e.g. Romul) rather than regular hyperparameter optimization methods (e.g. DE).\\n\\nOverall we consider it is still a variant of DE, but explain the adaptations required to make it work in the context of population based training. As shown in section 2.2, the differences include adding the concept of checkpoint, since this is a specificity of population based training compared to standard derivative-free optimization problems, and handling a dynamic function which strongly biases pairwise comparisons with parents in DE.\\n\\n> Personally I feel that \\\"fixed step\\\" issue in PBT is important, which should be mentioned early.\\n\\nThis is indeed one of the most important feature of ROMUL compared to current PBT methods. While this was mentioned in the abstract, it was indeed not explicitly stated in the introduction. We added a couple of mention of it.\"}",
"{\"title\": \"Thank you for your review\", \"comment\": \"Thank you for your review.\\n\\n> Significance: the improvements over existing methods seem slight. The experiments do not provide sensitivity analysis so it is a bit hard to conclude whether the results are statistically significant. But at the same time, the proposed method does show promise.\\n\\nWe understand the concern about significance of the results, and it\\u2019s an important point when evaluating methods of hyperparameter tuning. In practice, our experiments have dozens of workers in parallel, and it\\u2019s been challenging to provide sensitivity analysis on the neural network trainings given the significant resource requirements (24 GPUs x 12 hours for one experiment on average). However, we repeated the Rosenbrock benchmark on 20 runs, and performed a two sample Welch t-test on the results to assess the signicativity of the difference. The p-value was below 1e-4 highlighting that ROMUL performs significantly better than other methods on this ill-conditioned problem. These results are now included in section 3.2 of the paper and in the Table 3 of the appendix.\\n\\n> As a thorough evaluation purpose, it would be interesting to see how the proposed methods work in large set of hyperparameters (magnitude of 10-100).\\n\\nFollowing the same reasoning as above, we can experiment on multiple Rosenbrock, with the drawback that, again, this may not be representative of actual trainings. Interestingly, ROMUL still performs best or second best over other optimizers up to around 30 parameters. This will need to be investigated further in future works.\\n\\n> PBT needs to use validation loss to obtain fitness. Is your result evaluated on the validation data or the test data? If only evaluating on the validation data, the result may not reveal potential overfitting to the validation set. So it would be nice to have results on a held-out test set.\", \"we_want_to_clarify_this\": \"we train on the training set, and validate on the validation set. This validation loss is reported to the PBT algorithm. At the end of the training, we evaluate the models on the test set, and report the results on our paper (See Tab. 2 containing only test set results, and Tab. 3 providing both validation and test set results).\\nThe overfitting issue is however still an interesting point, and indeed in our early experiments we found out that PBT algorithms could all significatively overfit the validation set (some more than others, mainly depending on how \\u201cgreedy\\u201d they are in selecting good checkpoints although we did not back this observation with experiments, cf Section 4.2).\"}",
"{\"title\": \"Good paper\", \"review\": \"#### Summary\\nThe paper provides a new variant of PBT which utilizes ideas from differential evolution and cross-over. The original PBT and even initiator PBT do not perform crossover on the hyper-parameters, and insufficient cross-over may cause PBT to perform greedy in the initial phases which ends up with a suboptimal convergence. The investigation of better cross-over in PBT is itself an interesting research direction and the authors demonstrated its effectiveness in standard benchmarks and data augmentation tasks. The improvements of ROMUL-PBT are also helpful to the community since PBT has been applied in a variety of real world applications.\\n\\n#### Pros\\n1. Quality: The paper quality is in general good. The experiments are well designed and the results are good. So the experiments clearly supports the argument that differential evolution helps PBT.\\n2. Clarity: The paper is well written and easy to follow. The organization is also clear.\\n3. Originality: I think that adapting ideas from differential evolution to PBT is new, even though differential evolution itself is not something new.\\n4. The paper provides some benchmarking of PBT related algorithms in image classification, language modeling and data augmentation which is good for the community to understand these approaches.\\n\\n\\n#### Cons\\n1. Significance: the improvements over existing methods seem slight. The experiments do not provide sensitivity analysis so it is a bit hard to conclude whether the results are statistically significant. But at the same time, the proposed method does show promise.\\n2. As a thorough evaluation purpose, it would be interesting to see how the proposed methods work in large set of hyperparameters (magnitude of 10-100). \\n\\n#### Questions\\n1. PBT needs to use validation loss to obtain fitness. Is your result evaluated on the validation data or the test data? If only evaluating on the validation data, the result may not reveal potential overfitting to the validation set. So it would be nice to have results on a held-out test set.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Motivation of the proposed method is not clear and the writing can be further improved\", \"review\": \"In this submission, the authors propose a modification to the PBT (population-based training) method for HPO. It is interesting, however, there are several important issues to consider:\\n\\n1) The major part of the proposed method ROMUL is to replace some update rules based on Differential Evolution, which is a well-studied method in Evolutionary Algorithms. The novelty of the proposed method ROMUL is not high. But more important, it is unclear why such modifications are necessary. In other words, what new challenges in HPO can be addressed by conducting these modifications to PBT. Without clear and strong reasons to motivate these modifications, it is hard to evaluate the proposed method.\\n\\n2) The writing of this submission can be further improved. Many paragraphs and sentences are not logically organized, and it is difficult to understand the main points of the submission. For example, based on the Introduction section, it seems that the main part of this submission is to \\\"empirically study the different training dynamics of ...\\\" (second paragraph). And in introduction, the authors didn't well motivate the proposal of their method. Although several challenges are mentioned in the first paragraph, it is not clear which ones are solved by the proposed method, and how they are tackled.\\n\\nPersonally I feel that \\\"fixed step\\\" issue in PBT is important, which should be mentioned early.\\n\\nSeveral interesting findings are provided in the experiments. The authors can make them clear and highlight them by improving the writing of current version.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Review\", \"review\": \"This submission studies Population Based Training (PBT) methods for tuning and adapting hyperparameters over the course of training. It makes two contributions: (1) a novel PBT algorithm, ROMUL, and (2) a library for PBT-based training, hoptim.\\n\\nAfter reading the paper and the supplementary material, I can only comment on the first contribution. While the authors highlight the hoptim library as one of the main contributions of the paper, they do not describe what are their advantages over existing hyperparameter optimization frameworks or existing PBT implementations (e.g. the one in Ray). As far as I can tell, there is no source code or link to an anonymous repository so that we can evaluate this contribution. Commands for replicating experiments using hoptim are scattered throughout the manuscript, but please note that this is not enough to evaluate the quality or impact of the software. This is an important issue because the paper justifies weaker empirical results based on implementation differences that are never discussed (e.g. \\u201cThe differences in the job and population management in hoptim may explain the difference between our implementation and theirs, which is particularly marked on the training set reduced CIFAR-10: 12.8% for their vs. 13.9% for our implementation.\\u201d, or how the number of workers has a strong impact in the final result for PTB experiments.).\\n\\nBy leveraging ideas from Differential Evolution, ROMUL eases the task of defining the search space when considering hyperparameters with different magnitudes. In other words, this simplifies the task of \\u201ctuning the hyperparameter tuner\\u201d. The benefits of the proposed strategy are showcased by optimizing a 2D Rosenbrock function where the optimal values for the two parameters differ in magnitude (a=1, b=100). ROMUL is then applied to optimize Population Based Augmentation (PBA) on a reduced training set for CIFAR-10 and to tune the dropout rates in TransformerXL for Penn Treebank (PTB). Results for baseline methods are slightly worse than those in the literature even when using the code provided by the authors, and authors are encouraged to explain the reason for this. ROMUL seems to outperform other PBT methods when tuning TransformerXL, but the benefits on PBA when applied to CIFAR-10 are not so clear. It is difficult to evaluate the significance of these figures, as no standard deviation across seeds is reported. \\n\\nMy main concern regarding the experimental setup has to do with the mutation constant used for Initiator PBT and Truncation PBT. The works by Li et al. (2019) and Jaderberg et al. (2017) perturb hyperparameters by a multiplicative factor of 1.2 or 0.8 instead. This enables a much finer-grained search space than the one implemented in this submission, where the additive mutation constant might be too large for some parameters given the range in which these parameters are defined. For instance, the optimal value of $a$ in the Rosenbrock function is 1 but the step size for Initiator PBT is $(hi-lo)/30=224.24/30=7.47$. Since $\\\\hat{a}$ is initialized to 20, it is impossible for this method to even get close to the optimal value.\\n\\nThe discussion section provides some interesting experiments showcasing the effect of some design choices on PBT methods. It discusses the importance of the patience of the algorithm in order to account for the long-term impact of some hyperparameters (e.g. 
learning rate, dropout rate) as well as the impact of reusing existing checkpoints after mutation.\\n\\nWhile this submission discusses important research topics, I do not believe it is ready for publication yet. The authors highlighted two main contributions, but I believe there are three potential ones: (1) a PBT method that does not need extensive tuning, (2) a software library for PBT training, and (3) an empirical evaluation of different design choices for PBT methods. However, these need to be developed further (and potentially in separate papers) before they can be published at ICLR.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting problem, unconvincing solution.\", \"review\": \"** Summary **\\n\\nThis paper focuses on issues in the popular PBT algorithm for hyperparameter optimization. It investigates the 1) step size (which is typically a constant multiplier) 2) the variance induced by better weights and 3) the greediness of the algorithm, which they refer to as short-term vs. long term effects. These issues are well motivated, and it is intuitive that they are flaws in the original algorithm. The proposed approach is to use Differential Evolution which the authors claim makes the hyperparameter selection more robust. The paper also introduces a new library for online hyperparameter tuning.\\n\\n** Primary Reason for Score **\\n\\nThe strengths of this work are that it identifies and discusses some interesting issues with PBT, a commonly used algorithm. However, as someone who frequently uses variants of the PBT algorithm, the evidence provided in this work is not sufficient for me to adopt their recommendations. The method is based on heuristics and the experiments are unfortunately not rigorous: the gains are small and it is a single seed. To increase my score, I would need to see more robust results that make these heuristics convincing, for example multiple seeds with clear outperformance (ideally statistically significant). It would also be important to see ablation studies for the newly introduced parameters (e.g. m). In addition, some demonstration of the phenomena described having an influence on the performance would be helpful.\\n\\n** Strengths **\\n\\n1) The issues the paper addresses are well motivated, and well described. \\n2) The topic of the paper (PBT) is one that I think has not been sufficiently addressed by the community. In particular, the present PBT algorithm is commonly used but none of the improvements since 2017 have been widely adopted. It seems like a fruitful direction for research.\\n3) I appreciate the discussion of the results, which do not claim SoTA but instead go into detail on possible drivers of performance. \\n\\n** Weaknesses **\\n\\n1) The main contribution of the work is not convincing. It simply replaces one heuristic for another. While the results show an improvement, it is not clear. \\n2) Experiments are only run a single time, and this is surely a noisy process. Given that, the gains vs. PBT seem small. It is entirely possible that this small gain is reversed in a second run. If the TransformerXL is too expensive, then a smaller experiment which can be repeated multiple times would be a stronger piece of evidence for the method\\u2019s efficacy. \\n3) The authors claim to reduce the meta-parameters, yet introduce new parameters (F_1, F_2 and m). Also how was the size of the PBT step chosen? For the transformer experiment it goes from 1 epoch -> 10 epochs. Some ablation studies for these parameters would be needed for a reader to fully understand how to use this method on a new task. \\n4) The library is presented as a second major contribution, but it is not clear why the reader would choose to use it over existing libraries such as ray tune, which are popular and widely used. There is no comparison or discussion here, other than just saying that the new library is better. I also couldn\\u2019t find the library anywhere, the supplementary material is just a two page pdf, and there is no anonymized link. Please correct me if I missed this. \\n\\n** Minor issues **\\n\\ni) The ICLR 2020 template was used (rather than 2021). 
\\n\\nii) Bottom of page 7, \\u201craw\\u201d -> \\u201crow\\u201d\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
dluhjOg0qKn | Deep Ensembles for Low-Data Transfer Learning | [
"Basil Mustafa",
"Carlos Riquelme Ruiz",
"Joan Puigcerver",
"André Susano Pinto",
"Daniel Keysers",
"Neil Houlsby"
] | In the low-data regime, it is difficult to train good supervised models from scratch.
Instead, practitioners turn to pre-trained models, leveraging transfer learning. Ensembling is an empirically and theoretically appealing way to construct powerful predictive models, but the predominant approach of training multiple deep networks with different random initialisations collides with the need for transfer via pre-trained weights. In this work, we study different ways of creating ensembles from pre-trained models. We show that the nature of pre-training itself is a performant source of diversity, and propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset. The approach is simple: Use nearest-neighbour accuracy to rank pre-trained models, fine-tune the best ones with a small hyperparameter sweep, and greedily construct an ensemble to minimise validation cross-entropy. When evaluated together with strong baselines on 19 different downstream tasks (the Visual Task Adaptation Benchmark), this achieves state-of-the-art performance at a much lower inference budget, even when selecting from over 2,000 pre-trained models. We also assess our ensembles on ImageNet variants and show improved robustness to distribution shift. | [
"transfer learning",
"representation learning",
"computer vision",
"ensembles"
] | Reject | https://openreview.net/pdf?id=dluhjOg0qKn | https://openreview.net/forum?id=dluhjOg0qKn | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"UrLHrV84mz1",
"CZXQHVDlGj2",
"GTz9qP_UGMF",
"ZPIJPYlxpT",
"Per1wOXRrIt",
"_lKBjNTlPJs",
"WZdsAdbDuSs",
"m4ZmWnuGnpk",
"drXRBBkGeKk"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512812,
1605608586154,
1605608070174,
1605607217248,
1605606479000,
1604208551499,
1603943856514,
1603790199153,
1603711208449
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3626/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3626/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3626/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3626/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3626/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3626/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3626/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3626/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"All reviewers recommend that the paper be rejected. The reviewers appreciate the line of research and is worthwhile, but find that the paper lacks in technical novelty and insight. The AC is in consensus with their reviews due to the concerns raised regarding novelty and insight and recommends rejection.\"}",
"{\"title\": \"Reply to review by AnonReviewer4\", \"comment\": \"We would like to thank the reviewer for the response - we are glad you recognise the importance of the work, and that the writing and experiments were of a sufficient standard!\", \"in_response_to_some_of_the_concerns\": [\"__Novelty__\", \"On the topic of novel technical contributions, we agree with the reviewer that the work proposes an algorithm that builds off multiple preceding works, specifically the use of model selection to discriminate between pretrained models and the greedy algorithm for constructing ensembles. However, we argue that combining disparate efforts in order to present a computationally feasible and performant approach to tackle a problem of high importance is itself valuable progress.\", \"We also believe that our work has many contributions aside from the proposed algorithm:\", \"We found very few related works studying ensembling of modern neural networks in this data regime. We believe that studying different approaches is itself a solid contribution. For example, just demonstrating that the combination of transfer learning from a single pretrained initialisation + downstream diversity (from augmentations, hyperparameters, etc) can yield good performance in this type of task is itself a valuable finding, as we do not know of other works that show this in this data regime.\", \"For example, we know of only one other work that studies [hyperparameter ensembles](https://arxiv.org/pdf/2006.13570.pdf); they do not use production-scale architectures, do not restrict the data to the low data regime, and do not assess models on such a wide array of new tasks; and there is no transfer learning involved.\", \"Ultimately, we concluded that - whether the source of variation was due to upstream pretraining on different datasets, or simply upstream pretraining with multiple random seeds - that sources of upstream diversity are more performant than sources of downstream diversity. We do not find any concurrent works that demonstrate the same conclusion.\", \"We believe that the improvements to robustness demonstrated by the evaluation on ImageNet variants is a particularly interesting and valuable insight that makes this conclusion even more convincing, departing from the tunnel-vision focus on top-1 accuracy.\", \"Furthermore, demonstrating the superiority of combining the two sources of diversity is also a unique finding, and proposing a simple heuristic for doing so in a computationally efficient and performant way is a novel technical contribution.\", \"__Why is it helpful - and what is the role of diversity?__\", \"Firstly, we think in hindsight that the term diversity was used very informally in this effort - it is tricky as there are multiple definitions and it isn\\u2019t something the field has agreed upon, but really when discussing upstream/downstream \\u2018diversity\\u2019, what we meant was sources of model variation as opposed to diversity in the sense of diversity of predictions/errors etc.\", \"That being said, we totally agree with the reviewer here - we read many papers on the topic of quantifying and understanding diversity and spent a lot of effort applying them to analyse our ensembles. 
Though we generally saw that the ensembles using upstream/combined diversity were more \\u2018diverse\\u2019, and that they were marginally more optimal on the diversity/accuracy tradeoff, we didn\\u2019t find any systematic and convincing trends, and didn\\u2019t wish to engage in data dredging until an attractive/presentable-looking trend emerged. We also found this disappointing, as we were hoping to find some clear insights to back up why the different sources of model variation help.\", \"The paper the reviewer linked is fascinating! It may explain the strong performance on the ImageNet variants. To our understanding, the role of diversity in ensemble performance still seems contested in the literature, with many works proposing different diversity metrics and making contradictory claims about its importance; given this, and the fact that we already had a lot of strong results to present, we decided not to include it. We believe that in lieu of such analysis, thoroughly evaluating the different approaches from so many angles (19 diverse classification tasks for accuracy and 7 different variants of ImageNet for multiple robustness metrics) was a more useful contribution.\"]}",
"{\"title\": \"Reply to review by AnonReviewer3\", \"comment\": \"Many thanks for the comments; we hope we can address some of them here.\\n\\n1. We understand the reviewers concerns relating to novelty in terms of technique contributions; we combine a number of disparate techniques (in particular, KNN selection from Puigcerver et al. and greedy ensembling from Caruana et al.) in order to build the proposed algorithm.\", \"we_would_like_to_note_a_few_technical_contributions_however\": [\"Previous work did not consider selecting multiple models for finetuning on new tasks (we also stress-test the KNN, picking only 15 models from a pool of over 2002!). Indeed, as far as we are aware, previous work found that combining different experts models trained on different data is hard, and it did not work out of the box. From Puigcerver et al.: \\u201cSelecting and combining multiple experts for any downstream task is a natural extension of our work. This could be especially useful for tasks that require understanding several concepts, not necessarily captured by a single expert.\\u201d In this work, we found a way to make it work nicely.\", \"We propose a heuristic based on KNN accuracy (Section 2.3.1) which allows a performant balance between upstream and downstream diversity by automating not only which pretrained models to select, but how many to select for each downstream task.\", \"_However, aside from technical novelty, we believe there are a number of valuable contributions from this work:_\", \"To the best of our knowledge, we found no other works comparing or suggesting approaches to building ensembles of modern deep networks in the low data regime. We would like to re-iterate that there are only 1000 datapoints available per task downstream; for example, [concurrent work](https://openreview.net/pdf?id=_77KiX2VIEg) which developed ensembles in the low data regime achieved ~20% accuracy on CIFAR100, whereas our models achieve ~70+% accuracy (with only 10 data points per class). We further believe that our work is distinguished in considering production-scale models on a very diverse range of tasks, instead of experimenting with small-scale models on tasks such as MNIST. Furthermore, we find very little comparable work studying transfer learning and the low data regime in the context of ensembles.\", \"We note that even the performance of our baseline ensembles - which use \\u2018downstream\\u2019 diversity - are a contribution. Though we consider it a baseline, we know very few other works that study such ensembling techniques with such little data. Showing the competitiveness of hyperparameter or augmentation ensembles in this setting is in and of itself a contribution.\", \"Our main conclusion is that **upstream diversity is more useful than downstream diversity**; we know of no other concurrent or previous works that demonstrate this. Similarly, our demonstration of the superiority of combining both forms of diversity - and a proposed heuristic to do so efficiently - is also novel as far as we are aware.\", \"We believe that the results showing significantly increased distribution to domain shift, by assessing on the 7 ImageNet variants, are also a valuable contribution and a very interesting insight. Even if our models were not more accurate, we believe that showing such a significant boost on multiple robustness metrics is a very compelling result.\", \"2. Concerning how models are compared:\", \"We are not sure we fully follow the point relating to fairness of comparison. 
We chose to compare inference cost, as this was the clearest thing that was comparable across settings. We do not make any claims in relation to the total training time - though given that, for example, the R152x4 requires ~45x as many FLOPs for a forward pass as a single R50x1, we suspect they are at least in the same ballpark range for training cost.\", \"The core assumption of this work is that there is a number of available pre-trained upstream models, something which is increasingly true (see https://www.tensorflow.org/hub). In this setup, the downstream training cost of our ensemble methods is negligible, and most likely not the reason why we see the nice large gaps shown in Figure 2 a).\"]}",
"{\"title\": \"Reply to review by AnonReviewer2\", \"comment\": \"Many thanks for the clear review! We will correct some of the minor formatting errors.\", \"in_response_to_the_comments\": [\"1. Apologies for the lack of clarity in writing! We will definitely clear this up in the paper, but to clarify here:\", \"{Aug/Hyper} Ensembles are ensembles which utilise only downstream diversity (from augmentations/hyperparameter variation in the downstream finetuning).\", \"{Generalist/Expert} Ensembles are ensembles which utilise only upstream diversity (from pretraining models with multiple random seeds, or on different upstream datasets, respectively).\", \"{Aug/Hyper}{Generalists/Experts} combine both - e.g. HyperExperts utilise experts as an upstream source of diversity, combined with hyper parameterization as a downstream source of diversity. The proposed thresholding heuristic automatically decides the balance between the two forms of diversity.\", \"2. This is a good point! We would like to mention a few things:\", \"The assumption underpinning many works in Transfer Learning is that the pretraining cost is only incurred once, and therefore not considered in the finetuning phase. We therefore split computation costs in \\u2018upstream pretraining\\u2019 (ignored - analogous to practitioners downloading pretrained models from the web), \\u2018downstream finetuning\\u2019 (consistent between all our ensembles), and \\u2018inference\\u2019 (arguably the most important)\", \"Assumedly, one needs to separately compare finetuning cost vs. VTAB-1K score, and inference cost vs VTAB-1K score, as a model is only trained once but may be used many times.\", \"Inference cost: We predominately compared based on inference budget as this info is readily available (we can use FLOPS as a proxy, which is likely fair due to the similarity in architecture). We believe inference cost is the fairer comparison given the differences in setup, and also of more practical relevance, and hope that the benefits in this quarter are clear in the paper.\", \"Fine-tuning cost: Here it is harder to say; these numbers are not readily available to us for the baseline, but the single-SOTA models were also fine-tuned downstream with a hyper-parameter sweep.\", \"As a back of the envelope calculation, the best single model is a R152x4 which has ~44x flops for a forward pass vs a single R50x1. Our ensembles train 60 ResNet-50s downstream. Assuming a training pass scales linearly in compute time w.r.t. an inference pass (a very conservative assumption), this would perhaps put our ensembles at 1.36x the cost of fine-tuning once the strongest single model we compare against.\", \"We showed (Section 4.4) that our methods perform very well even with reduced fine-tuning budget; our ensembles that train 20 ResNet-50s downstream still beat the R152x4 baseline. Thus we suspect our ensembles compare favourably here too.\", \"3. As mentioned at the start of Section 4, all of our ensembles use the same finetuning/inference compute budget - Downstream, Upstream and Combined diversity are therefore on a fair playing field in this respect - we will adjust the text to make this clearer. The only difference lies in which (models+hyperparameters) we fine-tune in each case, but compute-wise they all get the same budget. 
One of the contributions of this work is to suggest a simple heuristic based on KNN accuracy which allows one to combine the two sources of diversity under a fixed finetuning budget.\", \"This again assumes the pretraining is \\u2018free\\u2019 - from a practical perspective, this is not an unreasonable assumption; there are many widely available pretrained checkpoints using a variety of architectures, pre-training methods, datasets etc.\", \"4. Interesting question!\", \"The default hyperparameter sweep (2 learning rates, 2 learning schedules) is used for ensembling approaches (augmentation, upstream diversity) which don\\u2019t include a hyperparameter sweep already (e.g. AugEnsembles, ExpertEnsembles). It is in fact the same hyperparameter sweep used in the original Visual Task Adaptation Benchmark paper. Concerning patterns between these hyperparameters and the wider sweep used to generate HyperEnsembles, we didn\\u2019t notice any systematic trends between the hyperparameters across the 19 tasks; it was highly task dependent, with per-task preferences relating to dropout, learning rate, schedule length etc varying significantly.\", \"The baseline numbers are copied from literature and were not replicated by us. They propose a hyperparameter heuristic which adaptively sets learning rates/schedule lengths/resolution/etc as a function of the downstream task\\u2019s properties (number of images, nature of the images and so on); downstream, they therefore only use one hyperparameter per task. However, this hyperparameter heuristic was defined after much experimentation on VTAB-1k, whereas we use the default suggested in the original Visual Task Adaptation Benchmark paper; therefore, it\\u2019s not simple to compare the ours to the baseline on this front.\"]}",
"{\"title\": \"Reply to review by AnonReviewer1\", \"comment\": [\"We thank the reviewer for the thorough response! We are glad you enjoyed the writing and found the explanation/experiments of a good standard, and hope we can address your comments below:\", \"About novelty of the work: We do believe this paper has many valuable insights, and will aim to make that clearer in the updated version.\", \"We would first like to note that as far as we could find, there were very few research efforts focussed on ensembles of modern neural networks in the low data regime - we reiterate that on downstream tasks, we only have 1000 data points available. For tasks such as CIFAR100 or CalTech101 this means only 10 data points per class. We believe it is a solid contribution to demonstrate the efficacy of different ensembling methods (both utilising \\u2018downstream\\u2019 sources of diversity and \\u2018upstream\\u2019 sources of diversity). For context, [recent work](https://openreview.net/pdf?id=_77KiX2VIEg) studies ensembling in a similar regime - their results on CIFAR100 (10-shot per class i.e. ~1k data points) achieve around ~20% top-1 accuracy, but our approaches achieve 70%+. Considering production-scale models, on realistic and diverse classification tasks, with very small amounts of data, is a very useful regime to work on and we believe there are many useful insights here for practitioners and researchers alike.\", \"We recognise the kNN-score for model selection component of the algorithm isn\\u2019t a novel contribution, but believe that the application to selecting a set of models, to be combined later, and stress testing it (picking 15 from over 2000 models!) are highly valuable contributions.\", \"We further note that the simple yet highly performant heuristic for combining upstream and downstream diversity is also a novel technical contribution.\", \"As you noted, yes; one of the key messages of this paper is that in this regime, creating ensembles which leverage differences in pretraining perform better than models which exploit diversity that is generated on the downstream task. As far as we are aware, there is no other concurrent work which demonstrates this conclusion. We also propose heuristics for computationally efficient ways to get the best of both worlds - again, we do not find parallels in the literature.\", \"Departing from the low-data regime and top-1 accuracy, the robustness on ImageNet results are arguably even more important than raw top-1 accuracy in a controlled setting, and we believe it is a valuable contribution to show ways to tackle it.\", \"Points about related literature:\", \"Many thanks for pointing out some of these other papers! We will update our work to properly compare and contrast against these works.\", \"As far as we understand, LEEP, DEPARA, Dual Diagram Similarity, and Task2Vec all propose different ways of selecting models - but they all focus on selecting a single model for a given task. 
Those methods could all act as a drop-in replacement for the KNN selection phase of the proposed algorithm, and are thus complementary to our work - for example, if using LEEP in the model selection phase improved ensemble performance on downstream tasks, then we believe that this would only further verify the efficacy/performance of our approach, as opposed to being a literature baseline that we did not evaluate against.\", \"The performance of our approach does not directly necessitate the use of the KNN / the \\u2018pre-selection\\u2019 phase; we feel it was a valuable contribution to show that these sorts of approaches can help narrow down the pool of potential models, thus making the algorithm computationally feasible for many practitioners. However, one of our best results was just using a pool consisting of models pretrained upstream with different random seeds. This is analogous to practitioners using multiple different pretrained models (e.g. on ImageNet), which are widely available online.\", \"Lastly, the main findings and contributions of our paper - in relation to the superiority of upstream/combined diversity instead of downstream diversity, the impact on robustness to distribution shift and the proposed algorithm itself are all novel with respect to the papers mentioned.\", \"Comparing with simple baseline of fine-tuning with early stopping: This is effectively what the \\u2018single model\\u2019 baselines are. Previous efforts fine-tune a single model on each downstream task, and found that such a strategy applied to very large scale models was the most performant approach. We achieve higher performance using smaller models with a significantly lower inference time. Note that those models do not require early stopping as after significant study on VTAB they developed a hyperparameter heuristic rule which defines schedule length as a function of the downstream dataset; this is arguably an even stronger baseline than finetuning with early stopping. Furthermore, early stopping could be applied to the baseline as well as our ensemble models, and therefore we consider that a separate direction.\"]}",
"{\"title\": \"Recommendation to Reject based on limited novelty and lack of convincing experiments\", \"review\": \"[Summary] This paper presents different ways of creating ensembles from pre-trained models. Specifically, authors first utilize nearest-neighbor accuracy to to rank pre-trained models, then fine-tune the best ones with a small hyperparameter sweep, and finally greedily construct an ensemble to minimize validation cross-entropy. Experiments on the Visual Task Adaptation Benchmark show the efficacy of the approach in selecting few models within a computational budget.\\n\\n[Score] Overall, I found the paper is well-written with experiments using large-scale benchmarks such as JFT, ImageNet21K and VTAB datasets. I like the problem of model selection for transfer learning. However, my major concern is about the novelty of the paper including concerns regarding prior works. Given the lack of novelty and convincing experiments, I vote for rejecting the paper. Hopefully the authors can address my concerns in the rebuttal period. \\n\\n[Weaknesses] The technical novelty of the paper is very limited. Besides combining few prior methods (e.g., Puigcerver et al. (2020); Caruana et al. (2004)) and then performing large scale experiments on JFT/ImageNet21K datasets, what are the main contributions of the paper are not clear. Although I admit that papers on analysis or study of different methods are quite interesting, I failed to find any major insights from the study of different diverse ensemble techniques. Is the upstream pre-training achieves better accuracy than that from the downstream fine-tuning stage the major take away message of the paper? Authors should clearly explain the major contributions of the paper.\\n\\nThere are few recent papers which discuss model selection for transfer learning. E.g., Duality Diagram Similarity: a generic framework for initialization selection in task transfer learning, ECCV 2020; DEPARA: Deep Attribution Graph for Deep Knowledge Transferability, CVPR 2020. How is the proposed approach related to these prior works? These paper should be clearly discussed with proper comparison in the experiments. \\n\\nComparison with prior methods is not satisfactory. Authors should clearly discuss what are the different ways of selecting models and creating ensembles out of that in the experiments. Specifically, what are the different alternatives to KNN and greedy approach used to construct ensembles? What about the performance of those methods? How is the proposed simple approach comparable to them in terms of performance vs complexity. E.g., how is the proposed approach comparable to the pretrained model selection strategy based on Task2Vec: see TASK2VEC: Task Embedding for Meta-Learning?\", \"how_is_the_proposed_method_related_to_leep\": \"A new measure to evaluate transferability of learned representations? Furthermore, how is the current approach comparable to a simple baseline on fine-tuning with early stopping?\\n\\nFigure 1 is not clear and it is not described clearly anywhere in the paper. I would like the authors to clearly explain this figure either in the caption or text in the introduction section.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Papers needs more clarity, better writing and total computational cost justification.\", \"review\": \"Summary:\\nPaper proposed an ensemble learning approach for the low-data regime. Paper uses various sources of diversity - pre-training, fine-tuning and combined to create ensembles. It then uses nearest-neighbor accuracy to rank pre-trained models, fine-tune the best ones with a small hyper-parameter sweep, and greedily construct an ensemble to minimize validation cross-entropy. Paper claims to achieve state-of-the art performance with much lower inference budget.\", \"recommendation\": \"Based on my understanding of the paper I recommend a clear rejection. Please look at the details below:\", \"strength\": \"1) Authors have tried to lot of experiments and give summary of conclusion/results in section 4. \\n\\n2) Experimental setup is clear and the motivation is valid. \\n\\nWeakness/Questions: \\n1) Paper was very hard to read. I had to go back and forth between pages to make sense of what\\u2019s defined and make my own definitions in many cases. In some cases, terms are defined but never used and in other cases terns are never defined. For example, \\na) AugEnsembles: Where is this used?\\nb) ExpertEnsembles: Where is this defined? \\nc) HyperExperts: Where is this defined?\\nd) AugExperts: Where is this defined? \\n\\n2) In figure 2, Single-model SOTA has only one model. Do you have a graph for total cost (training + inference) vs VTAB_{1K} performance for all the models that are shown in figure 2? Only showing an inference budget may not tell the entire picture here. \\n\\n3) In Table 2, how is computational cost different for different sources of diversity (D, U and C)? If C needs more computational cost than U and D then is the comparison fair? \\n\\n4) Appendix A.2 mentions the hyper parameters used when using \\u201chyper ensembles\\u201d and then there is a default hyper parameter sweep - \\u201cDefault Hyper Parameter Sweep\\u201d in appendix A.1. \\nDid you find any pattern in the hyperparameters with the best model? How were the hyperparameters chosen for baselines in table 1?\", \"minor\": \"1) VTAB should have been defined just before listing contributions - \\n\\u201cnew form of diversity improves on the Visual Task Adaptation Benchmark (VTAB) SOTA by 1.8% (Zhai et al., 2019).\\n\\n2) Paper repeatedly cites Puigcerver et al 2020 [1] to justify experimental framework or as a follow up paper which is also very similar to the current paper in terms of motivation. \\n\\n[1] Puigcerver, Joan, Carlos Riquelme, Basil Mustafa, Cedric Renggli, Andr\\u00e9 Susano Pinto, Sylvain Gelly, Daniel Keysers, and Neil Houlsby. \\\"Scalable transfer learning with expert models.\\\" arXiv preprint arXiv:2009.13239 (2020).\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Three main \\\"contributions\\\" in its framework are all from exisiting (related) works!\", \"review\": \"This paper does achieve good performance but its method is quite about engineering using very intuitive training tricks that everybody could be able to use given a lot of GPU machines. I would not like to encourage such work to be published as a research paper.\", \"pros\": \"1. The proposed framework achieves a good performance compared to its related works.\\n\\n2. It is a good organization of a lot of training techniques, and a good reference for engineering.\", \"cons\": \"1. No technique contribution. The main framework of this submission is very similar to the existing work [Scalable Transfer Learning with Expert Models] which has not been officially published but only on arXiv. Besides the common methods of pre-training and ensembling, it involves three \\\"new\\\" methods in its main framework: the first one is kNN selection on pre-trained models (referred to the same technique in the work [Scalable Transfer Learning with Expert Models]); the second is the hyperensembles by fine-tuning multiple diverse copies of the models (referred to the hyperparameter sets used in another related work [Big transfer (BiT): General visual representation learning]); and the last is greedy ensemble (referred to the third related word [Ensemble selection from libraries of models]). Not sure what is the contribution of this submission.\\n\\n2. The paper is quite about engineering tricks or combinations of tricks. In addition, in terms of engineering, it is not fair to compare to related methods under the condition of using the same numbers of pre-trained models. A better way may be based on the total computational COSTS such as the max running epochs, the network architectures, the total training time under the same usage of GPU machines.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Deep Ensembles for Low-Data Transfer Learning\", \"review\": \"Update after author response: I appreciate the authors' efforts to address my concerns and to raise some interesting points I missed. I still find the paper's insights are lacking some novelty to be published, but I think that this line of research is worth it!\\n\\n\\n------------------------\\n\\n\\nIn the low data regime, the use of transfer learning techniques collides with a widely used strategy: training multiple models for different purposes, being the main obstacle the lack of a clear diversity source.\", \"this_paper_proposes_a_simple_method_to_circumvent_this_problem\": \"identifying pre-training itself as an easily accessible and valuable form of diversity and proposing the greedy combination of several pre-trained models. Experiments show that the proposed strategy can achieve state-of-the-art performance on 19 different tasks. One necessary assumption of the method is the availability of a large pool of related models and the ability to look at the target data to make a decision on which models to fine-tune.\\n\\nOne of the critical points of the method is the use of cheap proxy metrics which assess the suitability of a pre-trained model before training it. To this end, the paper proposes the use of leave-one-out nearest-neighbour accuracy.\", \"pros\": [\"The paper takes one of the most important issues of deep learning: training high-performance models in the low data regime.\", \"The results section is well structured and experiments are convincing. The proposed method is evaluated from several points of view.\"], \"cons\": [\"The paper refers to a previous publication (Puigcerver et al., 2020) and from this point of view, the proposal represents only an incremental step, with a low level of novelty.\", \"The description of the method is not very specific and it refers to other existing methods as the main steps (Puigcerver et al. (2020) and Caruana et al. (2004))\", \"The proposed method is based on heuristics and there are no hints about why it does work. Diversity is a generic concept and there is a large number of papers that have explored several measures of diversity in order to understand \\\"when\\\" and \\\"why\\\" it is helpful. I miss some references to this previous knowledge. See, for example: Bian, Yijun, and Huanhuan Chen. \\\"When does Diversity Help Generalization in Classification Ensembles?.\\\" arXiv (2019): arXiv-1910.\", \"My main concern is not about the results, which I think are good, but about the level of novelty with respect to some existing publications (mainly Puigcerver et al. (2020)) and the lack of experiments devoted to understanding the role of diversity. It is a well-known fact that diversity per se is not sufficient to build strong multiple classifiers, and different kinds of diversity measures are helpful to diagnose it.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
C0qJUx5dxFb | Neural networks with late-phase weights | [
"Johannes Von Oswald",
"Seijin Kobayashi",
"Joao Sacramento",
"Alexander Meulemans",
"Christian Henning",
"Benjamin F Grewe"
] | The largely successful method of training neural networks is to learn their weights using some variant of stochastic gradient descent (SGD). Here, we show that the solutions found by SGD can be further improved by ensembling a subset of the weights in late stages of learning. At the end of learning, we obtain back a single model by taking a spatial average in weight space. To avoid incurring increased computational costs, we investigate a family of low-dimensional late-phase weight models which interact multiplicatively with the remaining parameters. Our results show that augmenting standard models with late-phase weights improves generalization in established benchmarks such as CIFAR-10/100, ImageNet and enwik8. These findings are complemented with a theoretical analysis of a noisy quadratic problem which provides a simplified picture of the late phases of neural network learning. | [
"weights",
"neural networks",
"sgd",
"learning",
"weights neural networks",
"successful",
"variant",
"stochastic gradient descent",
"solutions",
"subset"
] | Accept (Poster) | https://openreview.net/pdf?id=C0qJUx5dxFb | https://openreview.net/forum?id=C0qJUx5dxFb | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"NKr7yJIdlnY",
"W6y-DrwYyOx",
"_OY-riiMApI",
"FtAOgomT-Qj",
"7wt0tUefyW0",
"8Q9GhNIWa9W",
"kuYtSDdaY5l",
"Helod4zrUW",
"9lJ1gRATQAj",
"4QBtHoLDWRQ",
"C_XP3BcBHOZ",
"gLngvq_JbCp",
"1XkoZO8CW-k",
"Lr4myJ0ilEH",
"0pqT3ci2kdF",
"rIWp-yPZ01n",
"2G9emdue6Pa",
"YdZYlJKa-a",
"W3KnwJKuqj2",
"7vdGSI01_iN"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040449123,
1606125500208,
1606061140758,
1606051122506,
1605954381996,
1605868508388,
1605816135831,
1605805639731,
1605805532208,
1605805425631,
1605804387784,
1605804281388,
1605804072901,
1605803978979,
1605802502173,
1605802437050,
1603932131994,
1603866886373,
1603860231897,
1603827262555
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3621/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper proposes to learn an ensemble of weights given a set of base weights from some point late in normal training. The authors apply this approach to a number of configurations and find modest performance improvements for normal test settings and larger improvements for out of distribution settings. While reviewers had some concerns about the size of the improvement relative to baselines, all reviewers agreed that the proposed method is interesting and will likely impact future work, especially given the new experiments provided by the authors. I recommend that the paper be accepted.\"}",
"{\"title\": \"Thank you\", \"comment\": \"We are very grateful for your fast reevaluation and happy to read your positive opinion of our work.\\n\\n> Thanks for running this experiment! The results are a bit surprising to me, that using more memory does not equate to better performance.\\n\\nA potential explanation is that we trained this large-memory late-phase ensemble with as little data as a single model, to match our late-phase weights setup (so each ensemble member only sees $1/K$ data points). This result emphasizes the importance of selecting a low-dimensional, expressive set of late-phase weights, which do not require (or at least not many) additional passes through the dataset. This will be clarified in the text.\\n\\n> minor nit: \\\"we allow for this increased budget exceptionally for BatchEnsemble\\\" is a bit unclear in the appendix\\n\\nWe will clarify the Appendix, explaining that we trained the BatchEnsemble exceptionally for 250 epochs (instead of 200), as the hyperparameters provided by the authors were tuned for this larger training duration.\"}",
"{\"title\": \"Good improvements\", \"comment\": \"I\\u2019ve read the other reviews and the author\\u2019s responses. I think the authors did a thorough job during the rebuttal period and added many experiments/improved clarity in the revision. The resulting paper tells a more compelling story, and I believe should be accepted at ICLR. I have updated my score.\\n\\n\\n>This is a great additional experiment that we ran, maintaining the same data consumption as vanilla SGD. Performance still increases (CIFAR-100 82.17, CIFAR-10 96.32) compared to the SGD baseline (at the expense of the additional memory consumption of a DeepEnsemble), but not as much as when using our proposed late-phase weights (CIFAR-100 82.87, CIFAR-10 96.46).\\n\\nThanks for running this experiment! The results are a bit surprising to me, that using more memory does not equate to better performance.\", \"minor_nit\": \"\\u201cwe allow for this increased budget exceptionally for BatchEnsemble\\u201d is a bit unclear in the appendix\"}",
"{\"title\": \"Code added as SM\", \"comment\": \"Thank you for your quick response. We have now clarified Algorithm 4: the main loop runs while $t \\\\leq T$, where $T$ is the total number of minibatches of data consumed. We would like to stress that, in all our experiments, our algorithm passes the same number of times through the dataset as standard optimization of a single model would do. We do not train our late-phase weights for any longer (see also Table 17 for an absolute runtime comparison in seconds).\\nFurthermore, we have attached code to reproduce our main WRN 28-10, WRN 28-14 (predictive test set and aggregate OOD scores) and ImageNet results as a supplementary material zip. Code for reproducing the remaining experiments will be uploaded with the final version of the paper.\\n\\nDo not hesitate in contacting us if you have any additional questions.\"}",
"{\"title\": \"concern about reproducibility\", \"comment\": \"Thanks for the revision. I still have some concern regarding the late-phase training algorithm, as mentioned in Alg 4 of Fig 5 (on page 16), what is the stoppling criteria of the method (i.e. in the while loop, what does it mean not converged)? If all the main results could be made reproducible, I'd be happy to raise my score.\"}",
"{\"title\": \"Thank you\", \"comment\": \"Thank you for your prompt reassessment of our work, we very much appreciate it.\"}",
"{\"title\": \"response to authors\", \"comment\": \"Thank you for the follow-up experiments and revised writing.\\nI will increase my score to '7', since the experimental section is now thorough enough that I believe it provides value to the community.\\n\\nAlthough the absolute improvement is not a breakthrough, it is hard to make large progress in this field, and I believe this paper contributes to our understanding of the impact of ensembling (and \\\"late-phase\\\" ensembling).\\nI believe the experiments present in this paper will save time for future researchers interested in similar questions (since they are now more extensive, and compared to existing methods), and this could be built upon in future work.\"}",
"{\"title\": \"Reply to AnonReviewer3 (2/2)\", \"comment\": \"* *As the improvement in Section 3.3 seems marginal compared to the baseline and the standard deviation, it thus does not fully support the effectiveness of the batch normalization layers.*\\n\\nThank you for your assessment. We have now updated and expanded Section 3.3 with additional models which better showcase the effectiveness of late-phase batch normalization. The reviewer can find some of the new results in the table below, and in Table 5. We now show significant gains in accuracy (up to 0.4%), especially when considering that we are fine-tuning an off-the-shelf pretrained network. \\n\\n+---------------+--------------------------+-------------------+--------------------+\\\\\\n| Dataset | Model | Base | Late-phase |\\\\\\n+---------------+--------------------------+-------------------+--------------------+\\\\\\n| ImageNet | ResNet-152 | 78.37 +/- 0.01 | 78.77 +/- 0.01 |\\\\\\n| CIFAR-10 | WRN28-14 (SWA) | 96.75 +/- 0.05 | 97.45 +/- 0.10 |\\\\\\n| CIFAR-100 | WRN28-14 (SWA) | 84.01 +/- 0.29 | 85.00 +/- 0.25 |\\\\\\n+---------------+--------------------------+-------------------+--------------------+\\n\\nWe showcase the efficiency of late-phase BatchNorm weights in fine-tuning ImageNet, but we see it evidently on full training. In particular, on a WRN 28-14 (Table 3), we increase accuracy from 96.75% (SWA) to 97.45% (Late-phase+SWA) on CIFAR-10, and from 84.01% (SWA) to 85.00% (Late-phase+SWA) on CIFAR-100. Both are very high accuracies for ResNet models (in PWC [1, 2] this would be the best published result for CIFAR-100; on CIFAR-10 we would be second.).\\n\\n* *In terms of writing, I would recommend to write out the full algorithm of Alg. 1 or at least in the Appendix, including the variant of the SGD momentum and Adam. The SWA is also worth writing out clearly, which is not clear to the reader.*\\n\\nDone, this is a good suggestion, in particular in regards to SWA, as the paper did not stand on its own. We now present lower-level pseudocode in Appendix A (Figures 4 and 5).\\n\\n* *Overall, I think both the methodology and the writing need to be improved.*\\n \\nAs you can see from our answers to the other reviewers we have taken several steps to improve the clarity, readability and methodology of the revised paper. If writing and methodology are still a concern it would help us if you could point more specifically to where you think further improvements are required.\\n\\nFinally, we welcome the reviewer to take a second look into the paper and to possibly reassess his rather low rating with a particular focus on our revised text and the stronger experimental section. We remain open to any criticism and feedback on parts that could still be improved.\\n\\n[1] https://paperswithcode.com/sota/image-classification-on-cifar-100 \\\\\\n[2] https://paperswithcode.com/sota/image-classification-on-cifar-10\"}",
"{\"title\": \"Reply to AnonReviewer3 (1/2)\", \"comment\": \"Thank you for your review. As detailed below, we have tried to address your criticism to the best of our knowledge, and we remain open to any questions that you may have, that can help you raise your score.\\n\\n* *I find that this approach is quite sensitive the choice of the hyper-parameters, such as the beginning of the late-phase T0, and the noise perturbation sigma0. It is written in Section 2.1 that in practice \\u2026 sigma0>0 yields a set of models \\u2026 this results in improved final generalization. However, in the result of ImageNet in Section 3.3, the sigma0 equals to 0. Thus, it is not conclusive that sigma0>0 is better.*\\n\\nThank you for this comment. We recognize the seeming inconsistency of our choice of $\\\\sigma_0$, and now present our results on CIFAR and ImageNet in the main text all using a consistent choice of $\\\\sigma_0 = 0$ (and $T_0$, for non-pretrained models). A non-zero $\\\\sigma_0 = 0.5$ is now employed only to generate a diverse late-phase ensemble (see Section 3.2, Table 4) that is not averaged in weight space. This method shows strong OOD performance in comparison to other comparable techniques that are efficiently-trained but still require to integrate predictions during inference (SWAG, MC-dropout, BatchEnsemble). An overview of the updated performance can be found in the table below.\\n\\n\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|Testacc.(%)\\t\\u2003| \\u2003OOD\\u2003\\u2003\\t\\u2003|\\u2003\\tmCE\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003Base(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.35+/-0.16\\u2003|0.802+/-0.019\\u2003|47.84+/-0.41\\u2003|\\\\\\n|\\u2003Dropout(Mean)(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.31+/-0.20\\u2003|0.802+/-0.030\\u2003|48.97+/-0.33\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD) \\u2003\\u2003\\u2003\\u2003 \\n \\u2003\\u2003|82.87+/-0.14\\u2003|0.836+/-0.012\\u2003|45.59+/-0.25\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003MC-Dropout(SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.55+/-0.11\\u2003|0.823+/-0.049\\u2003|48.09+/-0.36\\u2003|\\\\\\n|\\u2003SWAG(SWA)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |82.12+/-0.03\\u2003|0.828+/-0.027\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003BatchEnsemble(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.25+/-0.10\\u2003|0.829+/-0.019\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD,non-averaged) 
|82.71+/-0.10\\u2003|0.862+/-0.009\\u2003|46.21+/-0.29\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Deepensemble (SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |84.09 \\u2003\\u2003\\u2003\\u2003|0.8312 \\u2003\\u2003\\u2003\\u2003|44.21\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n|\\u2003Deepensemble (Late-phaseBatchNorm,SGD) |84.69 \\u2003\\u2003\\u2003\\u2003|0.8575 \\u2003\\u2003\\u2003\\u2003|43.15\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\n\\n\\nFor more details on hyperparameter sensitivity, we would like to refer to our analyses (Table 12 and Figure 6) supporting that our method is robust to the choice of $T_0$. Here we show that for a large range of $T_0$, i.e., $T_0 > 80$, we improve on top of the baseline (CIFAR-10 - 96.16% and CIFAR-100 - 81.31%) in test set accuracy and out-of-distribution detection. See a small excerpt of the analyses here:\\n\\n+---------+------------------------+---------------------+ \\\\\\n| T_0 | CIFAR10 | CIFAR100 | \\\\\\n+---------+------------------------+---------------------+\\\\\\n| 40 | 96.34 +/- 0.08 | 79.69 +/- 0.11 |\\\\\\n| 80 | 96.50 +/- 0.11 | 81.72 +/- 0.18 |\\\\\\n| 100 | 96.45 +/- 0.08 | 82.48 +/- 0.21 |\\\\\\n| 120 | 96.48 +/- 0.20 | 82.87 +/- 0.18 |\\\\\\n| 140 | 96.26 +/- 0.17 | 82.53 +/- 0.21 |\\\\\\n| 160 | 96.23 +/- 0.11 | 81.41 +/- 0.31 |\\\\\\n+----------+-----------------------+----------------------+\"}",
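A minimal sketch of the two bookend operations described in this reply, under the assumption that the late-phase weights are given as a list of tensors: spawning K copies at time T_0, optionally perturbed with Gaussian noise of scale sigma_0, and collapsing them back into a single model via a plain weight-space average at the end of training. Function names are illustrative, not the paper's reference implementation.

```python
import torch

def spawn_late_phase_copies(params, k, sigma0=0.0):
    # At step T_0, replicate the chosen low-dimensional weight subset
    # K times, perturbing each copy with Gaussian noise of std sigma0
    # (sigma0 = 0 matches the deterministic setting reported above for
    # CIFAR and ImageNet; sigma0 > 0 yields a diverse ensemble).
    return [[p.detach().clone() + sigma0 * torch.randn_like(p)
             for p in params] for _ in range(k)]

def average_late_phase_copies(copies):
    # At the end of training, take the spatial average in weight space
    # to recover a single model from the K late-phase members.
    return [torch.stack(group).mean(dim=0) for group in zip(*copies)]
```

Between these two steps, one copy is sampled per minibatch and updated together with the shared base weights, which keeps the data consumption identical to training a single model.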
"{\"title\": \"Reply to AnonReviewer4 (2/2)\", \"comment\": \"* *Currently aggregate results are shown in Table 4, but it would be good to explicitly see, for example: how the performance of this method degrades with increasing CIFAR-10C corruption severity, as opposed to Deep Ensembles. Also, reporting the Mean Corruption Error (mCE) for each dataset individually will allow standard comparison to prior methods.*\\n\\nThank you. As suggested, we computed the mCE on CIFAR-100-C:\\n\\n\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|Testacc.(%)\\t\\u2003| \\u2003OOD\\u2003\\u2003\\t\\u2003|\\u2003\\tmCE\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003Base(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.35+/-0.16\\u2003|0.802+/-0.019\\u2003|47.84+/-0.41\\u2003|\\\\\\n|\\u2003Dropout(Mean)(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.31+/-0.20\\u2003|0.802+/-0.030\\u2003|48.97+/-0.33\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD) \\u2003\\u2003\\u2003\\u2003 \\n \\u2003\\u2003|82.87+/-0.14\\u2003|0.836+/-0.012\\u2003|45.59+/-0.25\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003MC-Dropout(SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.55+/-0.11\\u2003|0.823+/-0.049\\u2003|48.09+/-0.36\\u2003|\\\\\\n|\\u2003SWAG(SWA)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |82.12+/-0.03\\u2003|0.828+/-0.027\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003BatchEnsemble(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.25+/-0.10\\u2003|0.829+/-0.019\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD,non-averaged) |82.71+/-0.10\\u2003|0.862+/-0.009\\u2003|46.21+/-0.29\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Deepensemble (SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |84.09 \\u2003\\u2003\\u2003\\u2003|0.8312 \\u2003\\u2003\\u2003\\u2003|44.21\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n|\\u2003Deepensemble (Late-phaseBatchNorm,SGD) |84.69 \\u2003\\u2003\\u2003\\u2003|0.8575 \\u2003\\u2003\\u2003\\u2003|43.15\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\n\\nThe full table can be found in the Appendix B, together with all non-aggregate OOD results (Table 16).\\n\\n* *It 
seems that starting the ensembling at a \\\"late phase\\\" in training is the main contribution of this work. This could be applied to any ensemble method, and you propose several explicit instantiations.*\\n\\nFollowing this viewpoint, we now show that skipping the final weight averaging step and maintaining the late-phase-initiated ensemble at the end of training can be beneficial in OOD problems, as described above.\\n\\nIn addition, aligned with this view and following AnonReviewer1\\u2019s suggestion, we also ran a new experiment using a (non-memory-efficient) full DeepEnsemble created in a late-phase (reported inline in Section 3.2) and trained with the same data consumption as a single model. This did not improve performance as strongly as our multiplicative late-phase models, highlighting the importance of an appropriately-chosen low-dimensional set of late-phase weights.\\n\\n* *[...] to further investigate the role of T0 (the time at which ensembling starts)*\\n\\nIn the revised version of the paper, we now present in Table 12 and Figure 5 an extended analysis on $T_0$, which is indeed the main hyperparameter of the algorithm; in particular, it should not be set too early (which motivated the late-phase term in our algorithm).\\n\\nTo summarize, we have addressed all major concerns and if you agree that our paper has significantly improved we would be grateful for a reassessment of the work and rating. We are happy to answer any further questions that you would like to see addressed.\"}",
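For the non-averaged variant mentioned above, predictions are integrated across the K late-phase members at test time. A minimal sketch follows, assuming an entropy-style uncertainty score for the OOD detection step; the exact scoring rule used in the paper may differ.

```python
import torch

def ensemble_probs(logits_list):
    # Average the predictive distributions of the K late-phase members
    # (the "non-averaged" variant integrates predictions at inference).
    return torch.stack([torch.softmax(l, dim=-1)
                        for l in logits_list]).mean(dim=0)

def predictive_entropy(probs, eps=1e-12):
    # Higher entropy indicates less confident predictions, a common
    # signal for flagging out-of-distribution inputs.
    return -(probs * (probs + eps).log()).sum(dim=-1)
```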
"{\"title\": \"Reply to AnonReviewer4 (1/2)\", \"comment\": \"Thank you for your thorough review and constructive criticism. We have followed your specific suggestions and expanded our OOD experiments, enlarged our coverage of models, and considered alternative use cases involving ensembling the solutions found with our method. We reply point-to-point below:\\n\\n* *The experimental results are somewhat limited, but appear to be competitive with current efficient-ensembling approaches like SWA/SWAG. The absolute improvement of this method is not very large (<0.3% on CIFAR, <0.2% on imagenet), and there is a large gap to Deep Ensembles. [...] The experimental section would be greatly strengthened by additional experiments for different models and settings.*\\n\\n\\nWe have now extended our experiments to include efficient-ensembling baselines (dropout, MC-dropout, BatchEnsemble) and new network architectures showing that our results are in fact strong compared to the performance increases achieved with other methods.\\nBelow, we highlight our current results for larger models (test set accuracy shown in %):\\n\\n+---------------+--------------------------+-------------------+--------------------+\\\\\\n| Dataset | Model | Base | Late-phase |\\\\\\n+---------------+--------------------------+-------------------+--------------------+\\\\\\n| ImageNet | ResNet-152 | 78.37 +/- 0.01 | 78.77 +/- 0.01 |\\\\\\n| CIFAR-10 | WRN28-14 (SWA) | 96.75 +/- 0.05 | 97.45 +/- 0.10 |\\\\\\n| CIFAR-100 | WRN28-14 (SWA) | 84.01 +/- 0.29 | 85.00 +/- 0.25 |\\\\\\n+---------------+--------------------------+-------------------+--------------------+\\n\\nThese WRN 28-14 gains, obtained on top of SWA, place us among the best results for WRNs reported in PWC [1, 2]. The improvement on ImageNet is significant for a fine-tuning method.\\n\\nFurther, we would like to highlight that since our method yields a single model (unlike MC-dropout or BatchEnsemble), it can be used to obtain a stronger DeepEnsemble in a straightforward manner. To showcase this we now present a proof-of-concept experiment showing that this is another possible use case, if one has the resources to build a DeepEnsemble (CIFAR-10 in Table 1 & CIFAR-100 in Table 4).\\n\\n* *I weakly recommend acceptance, because the method appears promising for future work, and the experiments seem correct.*\\n\\nThank you for the encouraging feedback. In light of your and the other reviewers\\u2019 comments we have performed new experiments and added new baselines to our work (see general comment). This has significantly strengthened the paper and we hope that you now find it worthy of a \\u2018clear acceptance\\u2019.\\n\\n* *There is also a theory section included, though I am generally unconvinced by results in such simple toy examples. (such settings can usually be contrived to exhibit any desired behavior).*\\n\\nThank you for this comment -- we have rewritten Section 3.1, to clarify and discuss the main contributions and implications of our new theory. We agree that going beyond the NQP would be desirable, but with the current analytical tools that is likely out-of-reach and beyond the scope of this work.\\n\\n* *There are only 2 architectures tested on CIFAR-10, for example. It would also be informative to see the performance of these methods in \\\"harder\\\" settings -- for example, CIFAR-10 with fewer train samples.*\\n\\nWe thank you for this comment and now include results when training with less (a fifth) CIFAR-10 examples (Table 15). 
These complement our new results obtained on a larger WRN (the WRN 28-14, Table 3 and table above) on CIFAR-10. In both cases, our implicit regularization leads to performance gains (respectively +0.6%, +0.7%) that are higher than on the WRN 28-10 (+0.3%).\\n\\n* *The OOD uncertainty results could be expanded. Uncertainty estimation and robustness are some of the most relevant practical uses of ensemble methods, so it is especially important to evaluate ensembles in this context.*\\n\\nWe fully agree and have expanded our OOD section in response to your comment. We have improved our scores and considered yet another use case for our model: maintaining our late-phase ensemble at the end of training. Like SWAG, MC-dropout and BatchEnsemble, in this case, we integrate predictions across an efficiently-trained ensemble. This resulted in strongly improved OOD scores for our method (Table 4).\"}",
"{\"title\": \"Reply to AnonReviewer2 (2/2)\", \"comment\": \"* *I wonder as discussed by the authors, this is due to mostly the benefit of ensembles is through incorporating different modes as argued in [Fort et al., 2020] rather than a single mode.*\\n\\nYes; we now provide improved OOD results using a late-phase weight ensemble obtained with large initialization noise, where weight averaging fails (Table 4), lending further credibility to this hypothesis.\\n\\n* *While $\\\\sigma_0 $ and $T_0$ are hyperparameters of the algorithm, no good way to determine it is explained.*\\n\\nTo get best results these hyperparameters should be indeed tuned on a validation set. We note that this equally applies to e.g. the dropout probability (incidentally, we also present a supplementary analysis for the sensitivity of dropout to its hyperparameter, Table 13).\\n\\nWe therefore performed a finer hyperparameter scan showing that good results can be achieved when tuning only $T_0$, and that the range of optimal $T_0$ is not overly narrow (cf. Table 12, Figure 5). Please take into consideration the new robustness analyses of $T_0$ in Table 12 and Figure 6 in the appendix. Here we show, that for a large range of $T_0$, i.e. $T_0 > 80$, we improve on top of the baseline (CIFAR-10 - 96.16% and CIFAR-100 - 81.31%) in test set accuracy and out-of-distribution detection. We provide a small excerpt of the analyses below:\\n\\n+---------+------------------------+---------------------+ \\\\\\n| T_0 | CIFAR10 | CIFAR100 | \\\\\\n+---------+------------------------+---------------------+\\\\\\n| 40 | 96.34 +/- 0.08 | 79.69 +/- 0.11 |\\\\\\n| 80 | 96.50 +/- 0.11 | 81.72 +/- 0.18 |\\\\\\n| 100 | 96.45 +/- 0.08 | 82.48 +/- 0.21 |\\\\\\n| 120 | 96.48 +/- 0.20 | 82.87 +/- 0.18 |\\\\\\n| 140 | 96.26 +/- 0.17 | 82.53 +/- 0.21 |\\\\\\n| 160 | 96.23 +/- 0.11 | 81.41 +/- 0.31 |\\\\\\n+----------+-----------------------+----------------------+\\n\\n\\n\\n* *The role of section 3.1 is not clear. For one thing, the legend in Figure 1 is confusing where the role of non-integer K is mysterious to me. I would suggest clarifying what the message of the section would be in context of understanding late-phase weight models.*\\n\\nWe followed the reviewer\\u2019s suggestion and rewrote this section, discussing directly in the main text our analytical results, as analytical tractability was the primary motivation for studying the noisy quadratic problem. In addition, we replaced the figure by a new one which directly validates the theoretical claims (Figure 1). The previous legend was indeed unclear (the non-integer number referred to the learning rate).\\n\\n* *Was \\u201cLate-phase classification layers\\u201d ever evaluated or discussed in the main paper? I find some discussion on the appendix but seem to be missing in the main text.*\\n\\nThey were presented as the last type of late-phase weight models (in Section 2.2). In addition, we now mention their use again on the results subsection.\\n\\nIn summary, we have verified the hyperparameter robustness of our approach, we now provide stronger performance results and we improved the clarity of the revised paper. If you agree with this improvement we would welcome any reassessment of our work and the rather low rating. We remain open to discuss and will address any remaining concerns regarding your assessment of our work.\\n\\n[1] https://paperswithcode.com/sota/image-classification-on-cifar-100 \\\\\\n[2] https://paperswithcode.com/sota/image-classification-on-cifar-10\"}",
"{\"title\": \"Reply to AnonReviewer2 (1/2)\", \"comment\": \"Thank you for your thorough review and feedback. We ran a number of new experiments, which include an extended set of OOD results, the study of more network architectures, and new well-known baselines (dropout, MC-dropout, and BatchEnsemble) with comparable computational and memory requirements to our method. We reply to your comments point by point:\\n\\n* *BatchNorm late-phase seems to work well which is widely used among vision models so easily applicable. Also since late-phase can be applied post-pretraining, it can be used to improve pre-trained models.*\\n\\nTo facilitate broad adoption of our method we will provide a PyTorch drop-in replacement for a standard BatchNorm layer with the final version of the paper. Our aim is to make our method as easy to implement as other well-established complementary techniques like dropout or SWA.\\n\\n* *The idea of weight averaging is not so novel as duly noted by the authors.*\\n\\nWhile previous optimization algorithms employ various sorts of weight averaging (notably, SWA and Polyak averaging, which maintain running temporal averages), we would like to highlight that our approach differs in that we take a single, simple spatial average in a low-dimensional weight space. This simple averaging is made possible thanks to the late-phase ensembling that we introduce in our paper, as corroborated by our experiments where $T_0$ is varied (Table 12, Figure 5).\\n* *While the paper discusses efficient ways of utilizing late-phase weight ensemble and improving SGD training, the demonstrated benefit is not significant enough for practitioners to pursue the method. Without strong practical application potential, merit of the proposed method is weak since it does not obviously elucidate some aspects of neural network training. \\n\\u2026.\\nMain question arises for the paper is whether the proposed method is worth the effort. While all experiments show that the proposed method improves the baseline somewhat, deep ensemble baselines remain strong. Also quoted difference between methods does not mean statistically significant effect [...]. Results reported in Table 1, CIFAR-10 in WRN, a significant figure with a 10k test set should be around 0.2% and differences between different methods are at best marginal. This can be applied to most tables and except for Deep Ensemble\\u2019s improvement other differences are not very significant.*\\n\\nTo convince you that our paper is worth accepting we would like to point out our improved results (Tables 2, 3, 4 and 5 in the main text). Here, we would like to highlight that on a WRN 28-14 we increase accuracy from 96.75% (SWA) to 97.45% (Late-phase+SWA) on CIFAR-10, and from 84.01% (SWA) to 85.00% (Late-phase+SWA) on CIFAR-100. These high accuracies place us among the very best available results for WRNs in PWC [1, 2] (1st place on CIFAR-100 and 2nd place on CIFAR-10). 
Furthermore, these results also show our method and SWA, one of the strongest methods for improving generalization in neural networks, are complementary.\\n\\nOn the point raised over statistical significance, we stress that our CIFAR results are obtained with a consistent choice of $T_0$ and $\\\\sigma_0$, across a number of different architectures. Our updated results also show that our method achieves strong performance in out-of-distribution detection problems and it is robust to input data corruptions (Table 4 and 16 in the paper):\\n\\n+-------------------------------------------+----------------+-----------------+----------------+ \\\\\\n| Method | Test acc. (%) | OOD | mCE |\\\\\\n+-------------------------------------------+----------------+-----------------+----------------+\\\\\\n| Base (SGD) | 81.35 +/- 0.16 | 0.802 +/- 0.019 | 47.84 +/- 0.41 |\\\\\\n| Dropout (mean) (SGD) | 81.31 +/- 0.20 | 0.802 +/- 0.030 | 48.97 +/- 0.33 |\\\\\\n| Late-phase BatchNorm (SGD) | 82.87 +/- 0.14 | 0.836 +/- 0.012 | 45.59 +/- 0.25 |\\\\\\n| MC-Dropout (SGD) | 81.55 +/- 0.11 | 0.823 +/- 0.049 | 48.09 +/- 0.36 |\\\\\\n| SWAG (SWA) | 82.12 +/- 0.03 | 0.828 +/- 0.027 | - |\\\\\\n| BatchEnsemble (SGD) | 81.25 +/- 0.10 | 0.829 +/- 0.019 | - |\\\\\\n| Late-phase BatchNorm (SGD, non-averaged) | 82.71 +/- 0.10 | 0.862 +/- 0.009 | 46.21 +/- 0.29 |\\\\\\n| Deep ensemble (SGD) | 84.09 | 0.8312 | 44.21 |\\\\\\n| Deep ensemble (Late-phase BatchNorm, SGD) | 84.69 | 0.8575 | 43.15 |\\\\\\n+-------------------------------------------+----------------+-----------------+----------------+\"}",
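The BatchNorm late-phase scheme discussed in this reply lends itself to a compact implementation. Below is a minimal sketch of what a drop-in replacement layer could look like; it is not the authors' released module, the names (`LatePhaseBatchNorm2d`, `k`, `sigma0`, `active`, `averaged`) are placeholders of ours, and for brevity the K affine parameter sets are perturbed at construction rather than forked from trained weights at epoch $T_0$ as in the paper.

```python
import torch
import torch.nn as nn

class LatePhaseBatchNorm2d(nn.Module):
    # Sketch: BatchNorm2d with K late-phase affine parameter sets (gamma, beta).
    # Normalization statistics are shared; only the affine part is ensembled.
    def __init__(self, num_features, k=10, sigma0=0.5):
        super().__init__()
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gammas = nn.Parameter(1.0 + sigma0 * torch.randn(k, num_features))
        self.betas = nn.Parameter(sigma0 * torch.randn(k, num_features))
        self.active = 0        # member used for the current minibatch
        self.averaged = False  # set True at test time to use the weight average

    def forward(self, x):
        x = self.bn(x)
        if self.averaged:  # single spatial average over the K members
            gamma, beta = self.gammas.mean(0), self.betas.mean(0)
        else:
            gamma, beta = self.gammas[self.active], self.betas[self.active]
        return x * gamma.view(1, -1, 1, 1) + beta.view(1, -1, 1, 1)
```

During training one would cycle `active` across the K members from minibatch to minibatch; flipping `averaged` to True afterwards collapses the layer into a single standard model.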
"{\"title\": \"Reply to AnonReviewer1 (2/2)\", \"comment\": \"We would also like to point the reviewer\\u2019s attention to our new experiments in the OOD section, with additional new experiments on corrupted data. We have improved the performance of our late-phase averaged model and we now also consider maintaining a late-phase ensemble learned with large $\\\\sigma_0$. This results in significantly improved OOD scores and outperforms deep ensembles (Table 4 and 16 in the paper).\\n\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|Testacc.(%)\\t\\u2003| \\u2003OOD\\u2003\\u2003\\t\\u2003|\\u2003\\tmCE\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\\\\\n|\\u2003Base(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.35+/-0.16\\u2003|0.802+/-0.019\\u2003|47.84+/-0.41\\u2003|\\\\\\n|\\u2003Dropout(Mean)(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.31+/-0.20\\u2003|0.802+/-0.030\\u2003|48.97+/-0.33\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD) \\u2003\\u2003\\u2003\\u2003 \\n \\u2003\\u2003|82.87+/-0.14\\u2003|0.836+/-0.012\\u2003|45.59+/-0.25\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003MC-Dropout(SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.55+/-0.11\\u2003|0.823+/-0.049\\u2003|48.09+/-0.36\\u2003|\\\\\\n|\\u2003SWAG(SWA)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |82.12+/-0.03\\u2003|0.828+/-0.027\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003BatchEnsemble(SGD)\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|81.25+/-0.10\\u2003|0.829+/-0.019\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Late-phase BatchNorm(SGD,non-averaged) |82.71+/-0.10\\u2003|0.862+/-0.009\\u2003|46.21+/-0.29\\u2003|\\\\\\n|\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003| \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003|\\\\\\n|\\u2003Deepensemble (SGD) \\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003\\u2003 |84.09 \\u2003\\u2003\\u2003\\u2003|0.8312 \\u2003\\u2003\\u2003\\u2003|44.21\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n|\\u2003Deepensemble (Late-phaseBatchNorm,SGD) |84.69 \\u2003\\u2003\\u2003\\u2003|0.8575 \\u2003\\u2003\\u2003\\u2003|43.15\\u2003\\u2003\\u2003\\u2003\\t|\\\\\\n+-------------------------------------------------------------------+--------------------+---------------------+-------------------+\\n\\n* *On the ImageNet experiments, what is the validation accuracy of the pre-trained model?*\\n\\nWe now include this 
additional column in Table 5; it\\u2019s below the SGD baseline, hinting that the optimization of the original model was stopped too early and/or that the restart (reset of the internal state of the optimizer) is beneficial. We also kindly point out our updated ImageNet results (e.g., we increase accuracy from 78.37% to 78.77% on a deeper ResNet-152).\\n\\n* *Can you comment on the computational and memory complexity of your algorithm versus vanilla SGD?*\\n\\nExcept when using the hypernetwork weight interaction function (which requires additional tensor products to generate weights), our model results in essentially the same memory and computational complexity of vanilla SGD on one model. We now provide runtimes in Table 17 to make this explicit and emphasize this important feature more strongly in the text.\\n\\nIn the comparisons between late phase weights and SGD, do both algorithms consume the same amount of data? If so, this would be good to mention.\\nYes. Thank you, this important point has been clarified: \\u201cAll evaluated methods are trained using the same amount of data.\\u201d (Section 3.2).\\n\\n* *Could the entire network be treated as \\\"late-phase weights\\\"? Would this help performance?*\\n\\nThank you for this suggestion. This is a great additional experiment that we ran, maintaining the same data consumption as vanilla SGD. Performance still increases (CIFAR-100 82.17, CIFAR-10 96.32) compared to the SGD baseline (at the expense of the additional memory consumption of a DeepEnsemble), but not as much as when using our proposed late-phase weights (CIFAR-100 82.87, CIFAR-10 96.46). This is possibly explained by the low-dimensionality of our late-phase weight models, which can be efficiently trained using the same data consumption as a single model. We now discuss these new findings in Section 3.2, CIFAR-10 and CIFAR-100 paragraphs.\\n\\n* *In Algorithm 1: How does the loss function consume three inputs? This is different from when it is initially described.*\\n\\nWe clarified this notation.\\n\\n* *It's a bit unclear what is being compared in Figure 2.*\\n\\nWe reformulated the caption of the figure.\\n\\nGiven the additional experiments, baseline comparisons and the new strong performance results we remain open to additional questions and suggestions and welcome any reevaluation of your assessment given you agree that our paper is now much stronger.\"}",
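The OOD column in the table above is a single detection score. The exact metric is not restated in this thread, so, as an assumption on our part, here is one common recipe: score each input by the entropy of the (ensemble-averaged) predictive distribution and report the AUROC for separating in-distribution from out-of-distribution inputs.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def predictive_entropy(probs):
    # probs: (n_examples, n_classes) softmax outputs, e.g. ensemble-averaged
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

def ood_auroc(probs_in, probs_out):
    # Higher entropy should flag OOD inputs; treat OOD as the positive class.
    scores = np.concatenate([predictive_entropy(probs_in),
                             predictive_entropy(probs_out)])
    labels = np.concatenate([np.zeros(len(probs_in)), np.ones(len(probs_out))])
    return roc_auc_score(labels, scores)
```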
"{\"title\": \"Reply to AnonReviewer1 (1/2)\", \"comment\": \"Thank you for the thorough and encouraging review. We have carried out additional experiments and reworked the paper following your feedback listed point-to-point below:\\n\\n* *The weight interaction functions should be more explicitly defined rather than just described in text.*\\n\\nThank you for this comment, indeed the BatchNorm and last-layer late-phase weights were missing explicit formulas, which made the presentation in Sect. 2.2 less clear. We have corrected this.\\n\\n* *I think there should be more discussion on the choice of $T_0$. For example, in table 1, why does SGD perform worse when $T_0=0$? It would be good to get a sense of robustness to this hyperparameter.*\\n\\nWe have extended our $T_0$ sensitivity analysis to CIFAR-10 and searched using a finer step size (Table 12, see also Figure 5); our analysis reveals that this hyperparameter is robust, performance increases as long as it is set to a late-training value. Below, we present a slimmed-down version of Table 12 which shows that for $T_0$ > 80 epochs we improve on top of the baseline in both CIFAR10 (96.16+/-0.12) and CIFAR100 (81.31+/-0.16) in test set accuracy (and out-of-distribution detection, see Figure 6 in the appendix).\\n\\n+---------+------------------------+---------------------+ \\\\\\n| T_0 | CIFAR10 | CIFAR100 | \\\\\\n+---------+------------------------+---------------------+\\\\\\n| 40 | 96.34 +/- 0.08 | 79.69 +/- 0.11 |\\\\\\n| 80 | 96.50 +/- 0.11 | 81.72 +/- 0.18 |\\\\\\n| 100 | 96.45 +/- 0.08 | 82.48 +/- 0.21 |\\\\\\n| 120 | 96.48 +/- 0.20 | 82.87 +/- 0.18 |\\\\\\n| 140 | 96.26 +/- 0.17 | 82.53 +/- 0.21 |\\\\\\n| 160 | 96.23 +/- 0.11 | 81.41 +/- 0.31 |\\\\\\n+----------+-----------------------+----------------------+\\n\\nWe use the same hyperparameter value on both CIFAR-10 and CIFAR-100 and across the different architectures considered.\\n\\n* *Good results on CIFAR. Late-phase weights are shown to boost performance over SGD and to be complementary with SWA. There are some benefits in the OOD setting as well.*\\n\\nThank you for this appreciation of our results. We have strengthened our CIFAR experiments by considering additional models. On a WRN 28-14 (CIFAR-10), we increase accuracy from 96.75% to 97.45%, on top of SWA, a very high figure for residual networks on this dataset. We obtain an improvement in the order of ~1% on CIFAR-100 with this model.\"}",
"{\"title\": \"Joint reply to all reviewers\", \"comment\": \"We thank all four reviewers for their efforts and their constructive feedback. Working these in has substantially improved and strengthened our paper.\\n\\nMost importantly, the revised version now addresses the most critical point raised by the reviews and highlights the substantial performance increase that our method can achieve. We now report significantly higher performance increases on larger models, where regularization is essential.\\n\\nIn particular, on a WRN 28-14, we increase accuracy from 96.75% (SWA) to 97.45% (Late-phase+SWA) on CIFAR-10, and from 84.01% (SWA) to 85.00% (Late-phase+SWA) on CIFAR-100. In absolute terms, these are very high accuracies, placing us above all available results for WRNs (without extra training data) in PWC [1] for CIFAR-100; on CIFAR-10 [2] we are seconded only by a result using the deeper WRN 40-10. Furthermore, these results show that our method can be applied complementary to SWA, currently one of the strongest methods for improving generalization in neural networks. On ImageNet, we now increase accuracy from 78.37% to 78.77% on a ResNet-152 (Table 5). This improvement is brought by solely fine-tuning an existing, pretrained model.\\n\\nIn addition, we now compare our method to well-established alternatives that mitigate the computational and memory costs of a deep ensemble: dropout, MC-dropout, and BatchEnsemble. Our method results in stronger performance across all cases considered, including out-of-distribution detection problems, and it is arguably as easy to implement as these. Additionally, we now show that when retaining our late-phase ensemble at the end of training, out-of-distribution scores greatly improve. To facilitate its adoption, we will release to the community a plug-and-play PyTorch module for BatchNorm layers with the final paper.\\n\\nFinally, we show that our main results can be obtained with a single hyperparameter, the initialization time $T_0$, which is used across CIFAR-10/100. This hyperparameter is robust: the main requirement is to set it to a late-enough value, as shown in our new, finer sensitivity analysis.\\n\\nTo conclude -- in light of the new framing of our results (putting them into perspective to other methods), the additional baselines and several other experiments as well as performance improvements we would kindly ask the reviewers to reevaluate their ratings if they agree that the paper has improved.\\n\\nIn the text below, we answer in detail to the reviewer\\u2019s comments and we will remain responsive to any further questions the reviewers may have.\\n\\n[1] https://paperswithcode.com/sota/image-classification-on-cifar-100 \\\\\\n[2] https://paperswithcode.com/sota/image-classification-on-cifar-10\"}",
"{\"title\": \"Official review\", \"review\": [\"### Summary\", \"The authors propose late-phase weights, a method of updating the weights near the end of training via a splitting and ensembling mechanism. They analyze the benefits in the noisy quadratic setting. The method improves validation performance on a range of image recognition tasks and on enwiki8.\", \"### Comments\", \"The weight interaction functions $h$ should be more explicitly defined rather than just described in text.\", \"The paper is overall well written and flows smoothly.\", \"I think there should be more discussion on the choice of $T_0$. For example, in table 1, why does SGD perform worse when $T_0=0$? It would be good to get a sense of robustness to this hyperparameter.\", \"Good results on CIFAR. Late-phase weights are shown to boost performance over SGD and to be complementary with SWA. There are some benefits in the OOD setting as well.\", \"### Recommendation / Justification\", \"I vote to accept the paper. The idea is interesting, well-motivated, and seems straightforward to incorporate into existing pipelines. However, the improvements seems modest in some settings (e.g. ImageNet) and for the best performance, it seems like we should still stick to Deep Ensembles.\", \"### Questions\", \"On the ImageNet experiments, what is the validation accuracy of the pre-trained model?\", \"Can you comment on the computaional and memory complexity of your algorithm versus vanilla SGD?\", \"In the comparisons between late phase weights and SGD, do both algorithms consume the same amount of data? If so, this would be good to mention.\", \"Could the entire network be treated as \\\"late-phase weights\\\"? Would this help performance?\", \"### Minor comments\", \"I would consider alluding to possible choices of the weight interaction functions $h$ when it is first introduced at the start of 2.1.\", \"In Algorithm 1: How does the loss function consume three inputs? This is different from when it is initially described.\", \"It's a bit unclear what is being compared in Figure 2.\", \"(increased score from 6 to 7)\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"Summary:\\n\\nThe paper proposes a method to improve solutions found by SGD by ensembling subsets of weights in late-phase. A family of low-dimensional late-phase methods are analyzed and shown to improve generalization in CIFAR-10/100, ImageNet and enwik8. Authors also analyze the method in more tractable noisy quadratic settings. \\n\\nContribution of the authors is that rather obtaining ensemble they utilize efficient ensemble to guide SGD training and ultimately obtain a single model.\", \"reason_for_score\": \"While the paper discusses efficient ways of utilizing late-phase weight ensemble and improving SGD training, the demonstrated benefit is not significant enough for practitioners to pursue the method. Without strong practical application potential, merit of the proposed method is weak since it does not obviously elucidate some aspects of neural network training.\", \"pros\": \"The paper is clearly written and easy to understand the proposed method is. It is well structured that helps to improve the clarity. \\n\\nProposed method tackles a significant problem in the standard ensemble method in which both training/inference computation can be quite costly. The paper\\u2019s method only ensembles subset of weights therefore added training cost is minimal and since inference is done on averaged weight, it becomes essentially a single model.\\n\\nAmong various late-phase schemes, BatchNorm late-phase seems to work well which is widely used among vision models so easily applicable. Also since late-phase can be applied post-pretraining, it can be used to improve pre-trained models. \\n\\nAs far as I can tell various experimental conditions are very well controlled and thoughtfully designed.\", \"cons\": \"The idea of weight averaging is not so novel as duly noted by the authors.\\n\\nMain question arises for the paper is whether the proposed method is worth the effort. While all experiments show that the proposed method improves the baseline somewhat, deep ensemble baselines remain strong. Also quoted difference between methods does not mean statistically significant effect (see Vincent Vanhoucke\\u2019s article on reporting significant figures https://towardsdatascience.com/digit-significance-in-machine-learning-dea05dd6b85b). According to this article, results reported in Table 1, CIFAR-10 in WRN, a significant figure with a 10k test set should be around 0.2% and differences between different methods are at best marginal. This can be applied to most tables and except for Deep Ensemble\\u2019s improvement other differences are not very significant.\\n\\t\\nI wonder as discussed by the authors, this is due to mostly the benefit of ensembles is\\nthrough incorporating different modes as argued in [Fort et al., 2020] rather than a single mode. I imagine a single mode ensemble could be beneficial when variance within the mode is large, however for models considered by the authors seem to have small model variance which minimizes effect of technique utilizing single mode. \\n\\nWhile \\\\sigma_0 and T_0 are hyperparameters of the algorithm, no good way to determine it is explained. \\n\\nThe role of section 3.1 is not clear. For one thing, the legend in Figure 1 is confusing where the role of non-integer K is mysterious to me. 
I would suggest clarifying what the message of the section would be in context of understanding late-phase weight models.\", \"nits_and_additional_feedback\": \"Anonymized link is neither there in the main paper or included as supplementary material. If the authors intended to include the code, this is a note that code can not be found to the reviewers.\\n\\nFor models that do not use BatchNorm, I believe most interest to practitioners would be using Transformer based models. I wonder if rank-1 late-phase or LayerNorm late-phase would show improvements in this case. \\n\\nWas \\u201cLate-phase classification layers\\u201d ever evaluated or discussed in the main paper? I find some discussion on the appendix but seem to be missing in the main text. \\n\\n---\\nI thank the authors for their hard work addressing issues raised by the reviewers.\\n\\nAuthors have answered many issues pointed out (by improved performance and showing robustness to hyperparameters) and I've increased my score from 5 to 6, and support accepting the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"review\", \"review\": \"This work suggests a variant of ensembling that is more compute-efficient. Specifically, it involves forking an ensemble only in the late stage of training, and forming this ensemble via a \\\"low-dimentional\\\" family. That is, instead of maintaining independent networks, maintain only \\\"low-rank\\\"-style perturbations of the base network (for various instanciations of \\\"low-rank\\\").\\nThe experimental results are somewhat limited, but appear to be competitive with current efficient-ensembling approaches like SWA/SWAG. The absolute improvement of this method is not very large (<0.3% on CIFAR, <0.2% on imagenet), and there is a large gap to Deep Ensembles. I weakly recommend acceptance, because the method appears promising for future work, and the experiments seem correct. \\n\\nThere is also a theory section included, though I am generally unconvinced by results in such simple toy examples.\\n(such settings can usually be contrived to exhibit any desired behavior)\", \"weaknesses\": [\"The experimental section would be greatly strengthened by additional experiments for different models and settings. There are only 2 architectures tested on CIFAR-10, for example. It would also be informative to see the performance of these methods in \\\"harder\\\" settings -- for example, CIFAR-10 with fewer train samples.\", \"The OOD uncertainty results could be expanded. Uncertainty estimation and robustness are some of the most relevant practical uses of ensemble methods, so it is especially important to evaluate ensembles in this context. Currently aggregate results are shown in Table 4, but it would be good to explicitly see, for example: how the performance of this method degrades with increasing CIFAR-10C corruption severity, as opposed to Deep Ensembles. Also, reporting the Mean Corruption Error (mCE) for each dataset individually will allow standard comparison to prior methods.\"], \"comments_which_do_not_affect_the_score\": \"It seems that starting the ensembling at a \\\"late phase\\\" in training is the main contribution of this work. This could be applied to any ensemble method, and you propose several explicit instantiations. It could help to focus the writing in terms of this contribution, and also to further investigate the role of T0 (the time at which ensembling starts).\\n\\n---\", \"edit_after_rebuttal\": \"Increased score from 6 to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"I think both the methodology and the writing need to be improved.\", \"review\": \"To improve the generalization performance of SGD methods,\\nthis paper proposes to use an efficient ensemble-like approach\\u00a0\\nwhich computes an average of an ensemble of SGD weights\\nwhen retrained from some late-phase of SGD dynamics.\\u00a0\\nThis idea is different to most recent ensemble-based approaches which\\u00a0\\naim to average the predictions of the models.\\u00a0\\n\\nThe paper focuses on some specific layers of neural networks\\u00a0\\nin order to apply the late-phase training.\\u00a0\\nThe batch normalization layers are shown to be\\u00a0\\nsimple and effective. Some other layers are also analyzed,\\u00a0\\nincluding a recently introduced rank-1 multiplicative matrix\\u00a0\\nweights idea for full-connected layers.\\u00a0\\nSection 3 presents the numerical results and show that the generalization of SGD\\u00a0\\nis more-or-less improved on various benchmarks.\\nExplanation of why the generalization is improved in relation with the flatness\\nof energy landscape is also discussed. \\u00a0\\n\\nI find that this approach is quite sensitive the choice of the hyper-parameters,\\u00a0\\nsuch as the beginning of the late-phase T0, and the noise perturbation sigma0.\\u00a0\\nIt is written in Section 2.1 that in practice \\u2026 sigma0>0 yields a set of models \\u2026\\nthis results in improved final generalization. However, in the result of ImageNet in Section 3.3,\\u00a0\\nthe sigma0 equals to 0. Thus, it is not conclusive that sigma0>0 is better.\\u00a0\\nAs the improvement in Section 3.3 seems marginal compared to the baseline and the\\u00a0\\nstandard deviation, it thus does not fully support the effectiveness of the batch normalization layers.\\u00a0\\nI would recommend using some other dataset or models,\\u00a0\\nbut with a more consistent set of hyper-parameters.\\u00a0\\n\\nIn terms of writing, I would recommend to write out\\nthe full algorithm of Alg. 1 or at least in the Appendix,\\u00a0\\nincluding the variant of the SGD momentum and Adam.\\u00a0\\nThe SWA is also worth writing out clearly, which is not clear to the reader.\\u00a0\\nIs the DeepEnsemble result in Table 1 from SGD or SWA?\\u00a0\\nThis is not clear from the text.\\n\\nOverall, I think both the methodology and the writing need to be improved. \\n\\n##\\nThe revisions made by the authors have addressed all my concerns.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
6M4c3WegNtX | Neural Ensemble Search for Uncertainty Estimation and Dataset Shift | [
"Sheheryar Zaidi",
"Arber Zela",
"Thomas Elsken",
"Chris Holmes",
"Frank Hutter",
"Yee Whye Teh"
] | Ensembles of neural networks achieve superior performance compared to stand-alone networks not only in terms of predictive performance, but also uncertainty calibration and robustness to dataset shift. Diversity among networks is believed to be key for building strong ensembles, but typical approaches, such as \emph{deep ensembles}, only ensemble different weight vectors of a fixed architecture. Instead, we propose two methods for constructing ensembles to exploit diversity among networks with \emph{varying} architectures. We find that the resulting ensembles are indeed more diverse and also exhibit better uncertainty calibration, predictive performance and robustness to dataset shift in comparison with deep ensembles on a variety of classification tasks. | [
"uncertainty estimation",
"deep ensemble",
"dataset shift",
"robustness",
"uncertainty calibration"
] | Reject | https://openreview.net/pdf?id=6M4c3WegNtX | https://openreview.net/forum?id=6M4c3WegNtX | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"PUja1FnkWKz",
"yr0WUlZl2ar",
"ML0WgF_XsS",
"AQlAjscXtzH",
"fFm88ug-8AQ",
"lBtSKfdXXQw",
"aWTFLqT-zdY",
"cC1VfAedGT",
"yp0frOsRYG",
"PhusmWZoujU",
"nzfVaztAQVX",
"Vu6k8QOjWkK",
"DhNzkTAPqhx",
"3qMs3dBfyrG",
"xLFjvlnHzfs",
"YS3Dmkchpti"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040487613,
1606305331086,
1606236696571,
1606155252663,
1606153376282,
1606136673509,
1605670936452,
1605670458771,
1605670322336,
1605670099523,
1605669830102,
1605669796991,
1604596165936,
1604085949583,
1603821451913,
1603778688860
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3619/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes a new method to perform uncertainty estimation based on ensembles with diverse network architecture.\", \"the_reviewers_raised_a_few_concerns\": [\"Although it is ok not to compare with (Tao, 2019), an active analytical comparison with baselines for ensemble diversification should not be overlooked e.g. (Yao et al, 2008), (Olson et al, 2019), (Khurana et al, 2018), etc.\", \"The approach presented in this paper is not novel in the general idea of searching for or diversifying ensembles\", \"The reviewers agree that diversity methods can be implemented on top of NES, but it is unclear whether NES+diversity methods would give more over just diversity methods; so either measuring NES+diversity methods or a direct comparison of NES and diversity methods is important.\", \"We encourage the authors address these issues in the next revision.\"]}",
"{\"title\": \"Reply regarding baselines\", \"comment\": \"Thank you for your reply! While we agree that adding explicit diversity regularizers on top of NES is an interesting avenue to explore, we emphasize that this lies outside the scope of the questions we aimed to explore in our work. To answer our original question of the impact of varying architectures in deep ensembles, we compared to various variants of deep ensembles and ensembles with other varying hyperparameters (deep ensembles with different SOTA base learner architectures, with/without ensemble selection over initializations, ensembles with varying non-architectural hyperparameters, ensembles varying only in terms of depth/width). In discussion with other reviewers, we agreed that some further baselines were important to compare to, and we added those to Appendices C.3 and C.4. There are many baselines one can compare to, but we chose ones that provide the most insight into our original question. NES does not alter the loss function or include explicit diversity regularization such as NCL [1] and MOD [2] do. Also note that [3] only compare to deep ensembles and a variant of their method of anchored ensembles (i.e. \\u201cregularized\\u201d NNs which anchor all base learners to 0).\\n\\n-- References --\\n\\n[1] \\u201cEnsemble learning via negative correlation\\u201d by Y. Liu, X. Yao (Neural Networks 1999) \\n[2] \\u201cMaximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles\\u201d by S Jain, G Liu, DK Gifford (AAAI 2020)\\n[3] \\u201cUncertainty in Neural Networks: Approximately Bayesian Ensembling\\u201d by Pearce et al (AISTATS 2020) [4] \\u201cPitfalls of in-domain uncertainty estimation and ensembling in deep learning\\u201d by Ashukha et al. (ICLR 2020)\"}",
"{\"title\": \"Further comments on the experiment in Appendix C.3\", \"comment\": \"Thank you! We are glad you agree that the experiment in Appendix C.3 indicates varying the architectures in crucial for the improvements in NES.\\n\\n**\\u201cforwardselection procedure does the most of job even when the architecture is fixed\\u201d**: \\nActually, Appendix C.3 suggests the opposite: the majority of gain in performance is due to ensembling and not ensemble selection (i.e. ForwardSelect) for the baselines \\u201cDeepEns + ES\\u201d. More specifically, Figure 22 shows that single model NLL for the DARTS architecture is on average 1.8-1.82. A deep ensemble (i.e. without ensemble selection) of (say) size 10, brings this down to around 1.59, and adding ensemble selection on top of that reduces the loss to around 1.57. While it is beneficial to perform ensemble selection, the additional gain it offers over ensembling is relatively small. (The same applies to DeepEns + ES (RS/AmoebaNet).) Separately, also note that NES-RE has a lower ensemble NLL even though DeepEns + ES (RS) and NES-RE have very similar average base learner performance, which reaffirms the importance of varying architectures. Does this clarify your concern?\"}",
"{\"title\": \"Re:Author Response\", \"comment\": \"Thank you for the response. I agree that diversity methods can be implemented on top of NES but I don't think it's clear whether NES+diversity methods would give more over just diversity methods so I think either measuring NES+diversity methods or a direct comparison of NES and diversity methods is important.\"}",
"{\"title\": \"Thanks for the experiment\", \"comment\": \"Thanks for the experiment! it shows that architecture search really matters although it seems that forwardselection procedure does the most of job even when the architecture is fixed. I am raising my score to 5.\"}",
"{\"title\": \"Reminder to reviewers before the discussion period ends\", \"comment\": \"If the reviewers have any questions, generally about the latest version of our paper or our individual responses, we are very happy to answer those before the end of the discussion period. We hope we have addressed most reviewer concerns.\"}",
"{\"title\": \"Reply to all reviewers about changes\", \"comment\": \"Thank you to all reviewers for their feedback and important suggestions. We have posted individual responses. We highlight the main changes made to our paper here:\\n\\n1. We have added a new baseline in Appendix C.3 comparing NES to deep ensembles with ForwardSelect applied over a pool of random initializations of the fixed architecture, finding that NES also outperforms the resulting ensembles. (Suggested by *AnonReviewer5*)\\n\\n2. We have added Appendix C.4 where we compare NES to ensembles with other hyperparameters being varied, also finding that NES typically outperforms. (Suggested by *AnonReviewer4*)\\n\\n3. We have added predictive disagreement for the two ensembles with varying vs. fixed architectures in Section 3.2, showing higher diversity in an ensemble with varying architectures. (Suggested by *AnonReviewer4*)\\n\\n4. We have updated our related work section with suggestions from the reviewers.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"Many thanks for taking the time to read our paper and for your feedback. Below we address the reviewer\\u2019s main concerns:\\n\\n1. **\\\"While the novelty is incremental, I like the idea in general.\\\"**: Thanks for your kind words! Regarding novelty: to our knowledge, ensembles with varying architectures have not previously been considered in the context of uncertainty calibration and dataset shift, and our work is the first to utilize ideas from NAS to build algorithms for automatically selecting the architectures and to show that this yields improvements in an empirical evaluation using state-of-the-art NAS search spaces.\\n\\n2. **\\\"My main objection is that critical baselines are not compared with. Ensemble diversity is a well explored topic with multiple easy to implement regularizations to increase diversity [1, 2, 3] and several more that should be compared with.\\\"**: Thank you for pointing us to these papers (added to our related work!). We do believe that we use meaningful baselines, which was also stated by AnonReviewer4: *\\\"The authors made comparisons to several reasonable baselines.\\\"* A central aim of our work is to investigate the impact of varying architectures in an ensemble from the perspective of uncertainty estimation and dataset shift; we chose our baselines in accordance with this aim. Deep ensembles which keep the architecture fixed are therefore a natural baseline where the architecture is optimized and chosen from the same search space over which NES operates (e.g. DARTS and AmoebaNet architectures). Crucially, the training pipeline is identical for base learners in NES and deep ensembles. Diversity regularizing methods such as [1], [2] and [3] can be implemented on top of NES ensembles (i.e. train the selected architectures with diversity regularizing penalties). Also, note that work such as [4] have compared deep ensembles to multiple ensembling techniques (including snapshot ensembles, fast geometric ensembling, SWA-Gaussian, cyclical SGLD and dropout), showing that *\\u201cmost of the popular ensembling techniques require averaging predictions across dozens of samples (members of an ensemble), yet are essentially equivalent to an ensemble of only few independently trained models.\\u201d* Based on the results of that paper, we believe deep ensembles form a difficult-to-beat baseline.\\n\\n-- References --\\n[1] \\u201cMaximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles\\u201d by S Jain, G Liu, DK Gifford (AAAI 2020)\\n[2] \\u201cEnsemble learning via negative correlation\\u201d by Y. Liu, X. Yao (Neural Networks 1999)\\n[3] \\u201cUncertainty in Neural Networks: Approximately Bayesian Ensembling\\u201d by Pearce et al (AISTATS 2020)\\n[4] \\u201cPitfalls of in-domain uncertainty estimation and ensembling in deep learning\\u201d by Ashukha et al. (ICLR 2020)\"}",
"{\"title\": \"Response to AnonReviewer5\", \"comment\": \"Thank you for your feedback. Below we address the reviewer\\u2019s main concern.\\n\\n1. **\\\"My main point for the criticism is the lack of experiment which I find to be crucially important namely the comparison against deep ensemble of DNNs with same architecture to which ForwardSelect procedure has been applied. Train P DNNs with same architecture then perform ForwardSelect routine to take the best K of them and compare your method with such deep ensemble. Currently the authors only compare their method with deep ensembles to which no special selection procedure was applied. This causes bias and it is not clear whether the improvement in NES is due to the usage of different architectures or due to the selection procedure which encourages diversity in resulting ensemble.\\\"**: Thank you for this suggestion! We agree this is an important study to gain insight into the improvement from NES, and we have added this ablation to our work in Appendix C.3. As you described, we compare to additional deep ensemble baselines (called \\u201cDeepEns + ES\\u201d) which select the ensemble from a pool of trained random initializations of a fixed, optimized architecture. Our results show that NES algorithms continue to outperform these baselines. Also, note that the cost of \\u201cDeepEns + ES\\u201d baselines is substantially higher at the ensembling stage than usual deep ensembles, as we now train a pool of random initializations instead of just M random initializations (M = ensemble size). In fact, the total cost ends up becoming larger than NES, since for DeepEns + ES we first need to find a good architecture and then train it multiple times to form a pool (as in NES). Appendix C.3 contains further discussion on these points. \\n\\nWe hope the reviewer will consider increasing their score, as we have added the experiment they suggested. We welcome any questions.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thank you for your comments and the references. We have seen similar remarks previously and a number of them are addressed in our work; we refer to the appropriate sections.\\n\\n1. **\\u201censemble selection... is essentially Caruana et al. (2004)\\u201d**: We cited Caruana et al. (2004) in explaining our choice of forward selection on pg. 4 and in the related work section, and we do not claim forward selection to be a contribution.\\n\\n2. **\\u201cvaguely described evolutionary method, lacking details or analysis\\u201d**: Can you please specify what you felt is lacking in our description of NES-RE? We have described NES-RE in Section 4.2 and Figure 2. We have also provided pseudocode in Algorithm 2, implementation and scalability details in Appendix B.4 and code in the supplementary material.\\n\\n3. **\\u201cDo you randomly select from a set of seed architectures or randomly create\\u201d**: The architecture search spaces and how we randomly sample architectures are described in Appendices B.1 and B.3. In short, architectures in the DARTS search space are specified by cells which are DAGs where each edge represents an operation (e.g. max pooling, separable convolution). We sample both the structure of the cell and the operations at each edge.\\n\\n4. **\\u201censemble methodology of unweighted averaging is fairly naive\\u201d**: Despite it being simple and \\u201cna\\u00efve\\u201d, unweighted averaging is used in deep ensembles (and works well), and in order to isolate and evaluate the impact of varying architectures, we also used unweighted averaging. Note that multiple popular neural network ensembling techniques also use unweighted averaging (e.g. deep ensembles, snapshot ensembles, fast geometric ensembling etc.) Nonetheless, more sophisticated ensemble combination methods could readily be used with NES.\\n\\n5. **\\u201cThe proposed method can lead to overfitting\\u201d**: While it is difficult to guarantee no overfitting, in our experiments, we did not experience evidence for overfitting during ensemble selection despite using a fixed validation set. This is evident in Figures 4, 7, 17, since test performance of NES improves with increasing pool size/budget.\\n\\n6. **\\u201cRegarding evaluation -- Can you explain how that is addressed? Have you evaluated your method on a broader variety of datasets? Can you confirm that the test data used for search/optimization is different than the one used for measuring reported performance?\\u201d**: Section 5 evaluates NES on 5 datasets (FMNIST, CIFAR-10, CIFAR-100, ImageNet-16-120 and Tiny ImageNet) and 2 architecture search spaces (DARTS search space and NAS-Bench-201), using 3 metrics (NLL, classification error, ECE) including when there is test-time dataset shift. Regarding the use of test data, yes, \\u201cunless stated otherwise, all evaluations are on the test dataset\\u201d (section 5), and Algorithms 1-2 only use D_train, D_val. Using suggestions from reviewers, we have also added comparisons to new baselines in Appendices C.3 and C.4. Overall, we believe that NES\\u2019 empirical performance has been extensively evaluated, as also noted by AnonReviewer4: *\\u201cThe authors made comparisons to several reasonable baselines. The improvement of the proposed NES-RE/RS method over fixed architecture ensembles is consistent and significant.\\u201d*\\n\\n7. **\\u201cDid you consider comparing it to other methods such as Tao 2019?\\u201d**: Thank you for the reference. 
We have not compared to Tao 2019, because this seems akin to a boosting-based approach which requires training base learners sequentially (at least, partially). First, this is time-consuming for large networks and ensemble sizes (e.g. size 30 in our experiments) in contrast with randomization-based approaches for ensembling, such as NES and deep ensembles, which are readily parallelized. We have also been unable to find code implementation for Tao 2019. Second, in the context of predictive uncertainty and dataset shift, ensemble diversity due to randomization-based approaches reduces overconfident predictions by individual baselearners. It is unclear this benefit would be retained by boosting-based approaches which optimize solely for predictive performance on in-distribution data.\"}",
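To supplement the pointer to Section 4.2 and Algorithm 2, here is a compact sketch of the regularized-evolution loop of the kind NES-RE builds on (in the style of Real et al., 2019: mutate a well-performing candidate, discard the oldest). The callables `sample_arch`, `mutate` and `train_and_eval` are hypothetical placeholders, and the parent selection in the actual NES-RE (which samples from an ensemble chosen by ForwardSelect) is simplified here to a plain tournament.

```python
import collections
import random

def regularized_evolution(sample_arch, mutate, train_and_eval,
                          pop_size=50, tournament=10, cycles=200):
    # population holds (architecture, validation_loss) pairs; lower is better.
    population = collections.deque()
    history = []
    for _ in range(pop_size):
        arch = sample_arch()
        population.append((arch, train_and_eval(arch)))
    history.extend(population)
    for _ in range(cycles):
        candidates = random.sample(list(population), k=tournament)
        parent = min(candidates, key=lambda pair: pair[1])
        child = mutate(parent[0])
        population.append((child, train_and_eval(child)))
        history.append(population[-1])
        population.popleft()  # age-based removal: the "regularization"
    return history  # NES-RE would run ensemble selection over this pool
```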
"{\"title\": \"Response to AnonReviewer4 (Part 2/2)\", \"comment\": \"4. **\\u201cThe authors didn\\u2019t mention whether the [shifted validation set] protocol is applied to baselines (DeepEns). This leads to a question that is the improvement on out-of-distribution calibration coming from NES or shifted validation set. I consider this is a minor issue because Figure 21 in the appendix also shows that the clear improvement without shifted validation data.\\u201d**: Our apologies that this was unclear. We now explain the use of the shifted validation data for the baselines.\\nFirst, for evaluations on test data with shift (e.g. Figures 3b and 4b), for DeepEns (RS) there are many architectures we can choose a fixed one from, and we indeed choose this to minimize loss on the shifted validation set from a random sample, i.e. random search (see paragraph on NLL in Section 5). However, for DeepEns (DARTS/AmoebaNet), this is different, since there we just use the fixed architecture found in the DARTS paper / in the AmoebaNet paper. So, in this case, the same architecture is used when evaluating on test data with/without shift. Second, as you correctly point out, when evaluating on test data without shift (e.g. Figures 3a and 4a), none of the methods utilize a shifted validation set and we see a \\u201cclear improvement\\u201d of NES over baselines. Third, in response to AnonReviewer5, during the author response phase, we added a new baseline DeepEns+ES (DeepEns + Ensemble Selection) in which all deep ensembles utilize shifted validation data (see Appendix C.3). In this case, we exactly follow the protocol used for NES: ensemble selection uses shifted validation data when evaluating on shifted test data. In particular, also the DARTS/AmoebaNet deep ensembles in this baseline now *do* use the shifted validation data. We nevertheless find NES continues to outperform this baseline. \\n\\n5. **\\u201cTable 1 shows that larger ensemble size leads to worse ensembling performance. This is against our general intuition in deep ensembles where more ensemble members lead to better performance. My guess is more ensemble members leads to optimization difficulties in NES. I expect to see more discussion on this observation.\\u201d**: This is due to a misunderstanding. Please note that the two sub-tables in Table 1 show results on different search spaces using different model sizes and training routines (sorry, we have fixed the captions to make this clear!), so they are incomparable. As you point out, larger ensembles do indeed lead to better performance, and this is consistently true in our detailed experiments with growing ensemble size as shown in Figure 3. \\n\\nWe hope that the clarification of the simple misunderstanding in 5., comparison with respect to predictive disagreement and the addition of the additional baselines might convince you to increase your score. We are looking forward to any additional questions that may have remained unanswered.\\n\\n-- References --\\n[1] Wenzel et al. Hyperparameter Ensembles for Robustness and Uncertainty Quantification. In NeurIPS 2020\"}",
"{\"title\": \"Response to AnonReviewer4 (Part 1/2)\", \"comment\": \"Many thanks for your detailed, helpful review and for appreciating our work. We address your concerns below by incorporating your suggestions into our work and hope that you will consider updating your score:\\n\\n1. **\\u201cTo be more convincing, [Figure 1] can be supplemented with the predictive disagreement on the test set\\u201d**: Thanks for this suggestion! We have added a comparison of the predictive disagreement in an ensemble with fixed architecture vs an ensemble with varying architectures in the last paragraph of Section 3.2. The results (11.88% vs. 10.51% disagreement for varying vs. fixed architecture ensembles respectively) show that varying the architecture also yields higher predictive disagreement, i.e. higher diversity.\\n\\n2. **\\u201cThe baselines considered in this paper only include ensembles with a fixed architecture. It would be more convincing if the authors can include other baselines which include ensembles with different architectures (without neural architecture search). For example, one naive baseline would be ensembling DeepEns(Optimal) with different depths (fully trained independently). This highlights the need for neural architecture search. It also helps to understand how much diversity in architecture (among ensemble members) we need in order to achieve desired diversity in ensemble predictions. Additionally, it is encouraged to compare to hyper-parameter ensembles. This uncovers the question of which axis (hyper-parameter & architectures) is more effective in promoting an ensemble\\u2019s performance.\\u201d**: Thank you for this suggestion! We agree that both baselines you suggested are reasonable to compare, therefore we have added Appendix C.4 comparing NES to them. Note that hyperparameter ensembles [1] is concurrent work to ours (ours posted on arXiv a week earlier), and while both papers show that varying particular hyperparameters (note architecture is a hyperparameter) is beneficial, the question of which hyperparameters one should vary remains open and is left for future work. Nonetheless, we perform a comparison to two new baselines in Appendix C.4: 1. HyperEns: ensembles with a fixed, optimized architecture but varying learning rates and L2 regularization strengths, and 2. NES-RS (depth, width): ensembles with architectures varying only in terms of width and depth, keeping the cell fixed. The results show that NES tends to improve upon these baselines. Please refer to Appendix C.4 for details. \\n\\n3. **\\u201cAnother missing part in this paper is the cost analysis of NES\\u201d**: Thanks for this important comment! We have added a paragraph in Section 5 comparing the computational cost of NES vs. baselines, explaining why NES is not necessarily more costly than DeepEns. In summary, DeepEns baselines have two costs (which are subsumed into one cost in NES as explained below): finding an optimized, fixed base learner architecture and training M initializations of it. While both steps incur a cost, the former step can be extremely costly (e.g. where it is not clear what base learner architecture is best given a set of choices, requiring the use of random search or a typical NAS algorithm to find an optimized architecture). A concrete example from our paper is the deep ensemble made using AmoebaNet, which is an architecture found using a regularized evolution run that required 3150 GPU days. 
As you mention, \\u201cas far as I know, there is no guidance or automatic mechanism in ensembling neural networks with different architectures\\u201d; NES combines the architecture search and ensembling components, with its only cost being the training of K architectures to form the pool. Also, see Table 3 regarding costs in Appendix C.3.\"}",
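Since the new comparison added to Section 3.2 is stated in terms of predictive disagreement (11.88% vs. 10.51%), a sketch of one standard way to compute such a number follows: the average, over all member pairs, of the fraction of test inputs on which the two members predict different classes. The exact definition used in the paper is an assumption on our part.

```python
import itertools
import numpy as np

def avg_pairwise_disagreement(member_preds):
    """member_preds: (M, n_examples) array of predicted class labels, one row
    per ensemble member. Returns the mean over all member pairs of the
    fraction of examples on which the two members disagree."""
    member_preds = np.asarray(member_preds)
    pairs = itertools.combinations(range(len(member_preds)), 2)
    return float(np.mean([np.mean(member_preds[i] != member_preds[j])
                          for i, j in pairs]))
```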
"{\"title\": \"Simple and interesting method but the important experiment is missing\", \"review\": \"The paper suggests a new approach to the construction of ensembles of deep neural networks (DNN). Unlike previous methods which usually deal with multiple DNNs of same structure authors propose to form an ensemble of networks with different architecture. The main claim is that using diverse architectures increases diversity and hence the quality of predictions. To find the best architectures they use methodology inspired by neural architecture search (NAS) in particular random search and regularized evolution. The method for neural ensemble search (NES) is algorithmically simple although computationally hard. On several experiments the authors show NES outperforms standard deep ensembles formed from networks with same (even optimal) structure both in terms of test NLL and in terms of uncertainty estimation under domain shift.\\n\\nPros.\\nNice idea\\nSimple algorithm\\n\\n\\nCons.\\nMy main point for the criticism is the lack of experiment which I find to be crucially important namely the comparison aganist deep ensemble of DNNs with same architecture to which ForwardSelect procedure has been applied. Train P DNNs with same architecture then perform ForwardSelect routine to take the best K of them and compare your method with such deep ensemble. Currently the authors only compare their method with deep ensembles to which no special selection procedure was applied. This causes bias and it is not clear whether the improvement in NES is due to the usage of different architectures or due to the selection procedure which encourages diversity in resulting ensemble.\\n\\nP.S. Please correct me if I misunderstood the last point. I have read the corresponding part twice and found no evidence that you're using ForwardSelection when analysing the performance of ensembles of DNNs with same architecture. \\n\\n====UPDATE===\\nMy concerns were partly addressed in author's response so I have raised my score to 5.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea but more experimentation needed\", \"review\": \"The paper explores whether one can use Architecture Search to enhance ensemble diversity. They start with the observation that embeddings generated by different architectures (for multiple different initialization per architecture) are well separated from each other. They then try out a couple of architecture search methods to find ensembles with diverse architectures that minimize the loss.\\n\\nWhile the novelty is incremental, I like the idea in general. My main objection is that critical baselines are not compared with. Ensemble diversity is a well explored topic with multiple easy to implement regularizations to increase diversity [1, 2, 3] and several more that should be compared with.\\n\\n[1] \\u201cMaximizing Overall Diversity for Improved Uncertainty Estimates in Deep Ensembles\\u201d by S Jain, G Liu, DK Gifford (AAAI 2020)\\n[2] \\u201cEnsemble learning via negative correlation\\u201d by Y. Liu, X. Yao (Neural Networks 1999)\\n[3] \\u201cUncertainty in Neural Networks: Approximately Bayesian Ensembling\\u201d by Pearce et al (AISTATS 2020)\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Important problem but the solution lacks novelty\", \"review\": \"The paper proposes creating diverse ensembles of neural networks using an evolutionary method for finding base learners with high performance as well as mutual diversity. The selected base learners are then aggregated for an ensemble using a known method for ensemble selection. The paper is generally well written and addresses a relevant problem of constructing ensembles while training neural networks instead of building models first, independently, and later constructing ensembles.\\n\\nHaving said that, the paper lacks a significant contribution. The second phase (Ensemble Selection) of the proposed method is essentially the algorithm from Rich Caruana et al. 2004. The first phase of Pool Building suggests either a random generation or alternatively a vaguely described evolutionary method, lacking details or analysis. It is not clear how exactly is the random initialization of architectures performed. Do you randomly select from a set of seed architectures or randomly create (i.e., random number of layers, random number of units, random activation functions, random initial weights, etc.)?\\n\\nGrowing from a random neural architectures to multiple highly performing (besides being mutually diverse) through single permutations upon model training and evaluation seems like an expensive process. An evolutionary approach in such a manner seems in efficient. Can you report the time taken for some of the reported cases in the evaluation? The ensemble methodology of unweighted averaging is fairly naive. What was the reason to select this one particularly? \\n\\nContribution #1 (page 2) isn't really a contribution. It is common knowledge amongst practitioners. The proposed method can lead to overfitting because the search seems to be based on a fixed set for evaluation. \\n\\nRegarding evaluation -- Can you explain how that is addressed? Have you evaluated your method on a broader variety of datasets? Can you confirm that the test data used for search/optimization is different than the one used for measuring reported performance? Did you consider comparing it other methods such as Tao 2019 (mentioned below) ?\\n\\nThis paper can improve its literature survey by citing more directly relevant work in ensemble search using diversification. Here are few examples of more sophisticated ensemble evolution work, not necessarily for a DL base learner, but relevant nonetheless:\\n-Bhowan, et al. 2013. Evolving diverse ensembles using genetic programming for classification with unbalanced data. Trans. Evol. Comp\\n-Khurana et al. 2018. Ensembles with Automated Feature Engineering. AutoML at ICML.\\n-Olson et al. 2019. TPOT: A Tree-Based Pipeline Optimization Tool for Automating Machine Learning. Automated ML. \\n-Tao, 2019. Deep Neural Network Ensembles. Machine Learning, Optimization, and Data Science. \\n-Yao et al., 2008. Evolving artificial neural network ensembles, in IEEE Computational Intelligence Magazine\\n\\nOverall, it is a good problem, but this paper falls well short of the threshold.\", \"update\": \"I thank the authors for their response. Some justifications are provided and for that I will change my score. Overall, the paper still needs work.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"A promising solution in an interesting topic. Some modifications are needed to be more convincing.\", \"review\": \"The authors addressed my concerns in the rebuttal. I have raised my score.\", \"summary\": \"This paper combines AutoML techniques and deep ensembles to improve the ensemble diversity so that it improves the entire ensemble quality in both in- and out-of-distribution dataset. The authors made a study on two possible AutoML methods which can be combined with ensembles: 1. Random search & 2. Regularized mutation. The empirical results showed that the proposed NES methods outperform commonly selected baselines.\", \"pros\": \"How to improve ensemble diversity is one of the core topics on the way to better ensemble performance (in terms of both accuracy and uncertainty metrics). Many previous research works focused on 1. Build efficient ensembles while remaining reasonable diversity; 2. Improve ensemble diversity by exploring the hyper-parameter space. In practice, it is standard to ensemble neural networks with different depths or widths to improve diversity and hence the ensemble performance. However, as far as I know, there is no guidance or automatic mechanism in ensembling neural networks with different architectures. Thus, the problem this paper aims to tackle is significant and it will benefit the research community.\\n\\nThe authors did a self-contained introduction on ensembles & uncertainty and AutoML. The coverage on mutation AutoML is limited but this is still reasonable due to the page constraint. The motivation of why we want to combine AutoML and deep ensembles is clearly stated in section 3.2. Figure 1 demonstrates the effectiveness of various architectures in promoting ensemble diversity.\\n\\nThe empirical evaluation ranges across CIFAR dataset and ImageNet. It also includes calibration performance under dataset shift (uncertainty estimation on out-of-distribution dataset), which is the common benchmark to evaluate an ensemble's performance. The authors made comparisons to several reasonable baselines. The improvement of the proposed NES-RE/RS method over fixed architecture ensembles is consistent and significant.\", \"cons\": \"Figure 1 demonstrates the motivation behind this work. To be more convincing, it can be supplemented with the predictive disagreement on the testset or the averaged KL divergence between the predictive distribution among ensemble members. Moreover, the figure compares diversity between ensembles with different architectures and ensembles with random seeds. For a more comprehensive study, the figure can include a study on ensembles with different hyper-parameters. A more interesting baseline I will mention below is an ensemble with different depths. \\n\\nThe baselines considered in this paper only include ensembles with a fixed architecture. It would be more convincing if the authors can include other baselines which include ensembles with different architectures (without neural architecture search). For example, one naive baseline would be ensembling DeepEns(Optimal) with different depths (fully trained independently). This highlights the need for neural architecture search. It also helps to understand how much diversity in architecture (among ensemble members) we need in order to achieve desired diversity in ensemble predictions. Additionally, it is encouraged to compare to hyper-parameter ensembles. 
This uncovers the question of which axis (hyper-parameter & architectures) is more effective in promoting an ensemble\\u2019s performance.\\n\\nAnother missing part in this paper is the cost analysis of NES and how much does it increase compared to deep ensembles. Both NES-RS and NES-RE require training after sampling one neural architecture. This leads to a non-trivial computational overhead compared to traditional deep ensembles. \\n\\nIn section 4.3, it mentioned that a proportion of validation data encapsulates the belief about test-time shift. The authors didn\\u2019t mention whether the same protocol is applied to baselines (DeepEns). This leads to a question that is the improvement on out-of-distribution calibration coming from NES or shifted validation set. I consider this is a minor issue because Figure 21 in the appendix also shows that the clear improvement without shifted validation data.\\n\\nTable 1 shows that larger ensemble size leads to worse ensembling performance. This is against our general intuition in deep ensembles where more ensemble members lead to better performance. My guess is more ensemble members leads to optimization difficulties in NES. I expect to see more discussion on this observation.\\n\\nOverall, the authors propose a compelling solution to automatically design the neural network architectures in deep ensembels. However, the cons slightly outweight the pros in this version.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
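To make the diversity point in this review concrete, here is a minimal sketch of the averaged pairwise KL diversity metric the reviewer asks for; the array layout (members x test points x classes), the function name, and the epsilon smoothing are assumptions, not taken from the paper.

```python
import numpy as np

def avg_pairwise_kl(probs, eps=1e-12):
    """Average pairwise KL divergence between ensemble members' predictions.
    probs: array of shape (M, N, C) -- M members, N test inputs, C classes.
    Higher values indicate more diverse predictive distributions."""
    M = probs.shape[0]
    total, pairs = 0.0, 0
    for i in range(M):
        for j in range(M):
            if i == j:
                continue
            # KL(p_i || p_j) per test input, then averaged over inputs
            kl = np.sum(probs[i] * (np.log(probs[i] + eps)
                                    - np.log(probs[j] + eps)), axis=-1)
            total += kl.mean()
            pairs += 1
    return total / pairs
```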
]
} |
W75l6XMzLq | Hindsight Curriculum Generation Based Multi-Goal Experience Replay | [
"Xiaoyun Feng"
] | In multi-goal tasks with sparse rewards, it is challenging to learn from a large number of experiences with zero rewards. Hindsight experience replay (HER), which replays past experiences with additional heuristic goals, has shown that it is possible for off-policy reinforcement learning (RL) to make use of failed experiences. However, the replayed experiences may not lead to well-explored state-action pairs, especially for a pseudo goal, which instead results in a poor estimate of the value function. To tackle the problem, we propose to resample hindsight experiences based on their likelihood under the current policy and the overall distribution. Based on the hindsight strategy, we introduce a novel multi-goal experience replay method that automatically generates a training curriculum, namely Hindsight Curriculum Generation (HCG). As the range of experiences expands, the generated curriculum strikes a dynamic balance between exploiting and exploring. We implement HCG with the vanilla Deep Deterministic Policy Gradient (DDPG), and experiments on several tasks with sparse binary rewards demonstrate that HCG improves the sample efficiency of the state of the art. | [
"reinforcement learning",
"multi-goal task",
"experience replay"
] | Reject | https://openreview.net/pdf?id=W75l6XMzLq | https://openreview.net/forum?id=W75l6XMzLq | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"oTwFIDP58NF",
"Y_jx9d3MH9X",
"AAoZKG_Mn8W",
"uSBk8alBhp",
"fPgrizr4Ycs"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512878,
1604027031245,
1603992233931,
1603980475362,
1603889602850
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3618/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3618/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3618/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3618/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper extends the idea of hindsight experience replay (HER) to learn Q functions with relative goals by constructing a distribution over relative goals sampled from a replay buffer using a clustering algorithm. This approach is evaluated on three multi-goal RL environments and is shown to learn faster than baselines.\\n\\n${\\\\bf Pros}$:\\n1. Faster convergence as compared to baselines\\n2. Interesting use of clustering in the context of HER but this choice is made without strong justifications or formal arguments\\n\\n${\\\\bf Cons}$:\\n1. Some of the key choices made in this paper are not justified or explained property, e.g. - the goal sampling strategy, choices made in the clustering algorithm and associated heuristics, implicit assumptions (e.g. R1 raised the question of using L2 distance in measuring metrics between two states) \\n2. There are several choices made without sufficient formal arguments, verification or guarantees. \\n\\nThe paper studies an interesting problem but could be made stronger by incorporating feedback received during the discussion period.\"}",
"{\"title\": \"The paper has some major issues\", \"review\": \"This paper developed methods for resampling from the hindsight experience replay buffer. The resampling strategy was developed based on the current policy, and the overall distribution of the relative goals. As the distribution over goals evolves over time, the multi-goal agent's replay curriculum is adjusted throughout the learning process. The developed approach, called hindsight curriculum generation (HCG), was applied to DDPG, and evaluated using a set of four robot control problems. Results show that HCG performed better than a few baseline methods, and its performance was claimed to be insensitive to the choice of hyper-parameters.\\n\\nThe paper has a few issues. The sampling from hindsight experience is partially based on the likelihood of the corresponding state-action pair under the current policy. The reviewer is not sure that this strategy makes sense when the developed approach was applied to off-policy RL methods (DDPG in this case). It seems to be suggesting that, without exploration (completely following current policy), off-policy RL methods get the best results. Using only recent policies limits the variance over the collected experience. Some more discussions and justifications are needed for \\\"the likelihood of the corresponding state-action pair under the current policy.\\\" \\n\\nThe two baselines of CHER and HER-EBP were not mentioned in the experiment section. The reviewer had to search the whole paper, and found the acronyms mentioned in the introduction section. The results are suspicious: how come CHER and EBP performed even worse than naive HER in Figure 2? The results are inconsistent to those reported in the CHER and EBP papers. \\n\\nIt's stated that \\\"Results in Figure 3 indicates that the choice of L is robust.\\\" This is apparently not the case from Figure 3. In the pick-and-place task, when L=5, it reached 0.8 success rate in 15 epochs, whereas the agent couldn't succeed at all when L=1 or L=0.5. The conclusion was not supported by evidence or experimental results. \\n\\nThe paper mentioned \\\"Appendix\\\" in a few places, but there is no appendix in this submission. \\n\\nIt's suggested to experiment with RL methods other than DDPG. There's the potential of applying the developed approach to on-policy methods that have been evident to performing better than DDPG in robot control tasks.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Detailed ablation studies are needed to understand why the approach works; The idea needs to be better motivated and presented.\", \"review\": \"The paper introduces an extension of Hindsight Experience Replay (HER) called Hindsight Curriculum Generation (HCG) which is demonstrated to learn faster in multi-goal RL benchmarks.\", \"the_approach_consists_of_two_distinct_contributions\": \"First, is the idea of learning Q-values as a function of relative goals, instead of absolute goals. Second, is the idea of constructing a distribution over the relative goals from the replay buffer using a simple clustering algorithm and defining a sampling distribution over them.\\n\\nThis approach is then evaluated on three multi-goal RL environments that were previously open-sourced by OpenAI. Learning curves from these domains show that their approach HCG can produce faster learning when compared to the baselines.\", \"pros\": \"The approach seems to produce faster learning compared to the baselines considered.\", \"cons\": \"The paper seems to be rushed, making it hard to follow the ideas presented in the paper. It also has a few typos.\\n\\nThe motivation for the introduced approach is hard to understand, which makes it difficult to understand why/when the approach works for a given domain. The goal-sampling strategy seems to be arbitrary and no motivation is presented here to justify such an approach.\\n\\nThe related work section in the paper is not detailed. There are many approaches (listed below) that look at discovering curriculums for improving the speed of learning, and these can be considered to be orthogonal to HER, making it applicable to the current setup the authors have considered. The authors need to discuss how their approach of producing a curriculum relates/differs with the curriculum-based approaches.\\n\\nThe algorithm section in the main text does not have the necessary details to help understand the approach. In the pseudocode, many notations are used to present the approach, but I do not see the definitions for them in the main text. \\n\\nFrom the experiments, it is not possible to tell whether the improvement in observed performance is due to the idea of using relative goals or the goal-sampling strategy. I would suggest introducing a baseline HER agent that operates on the relative goals, similar to the HCG agent. This should help inform which part of the idea is important.\\n\\nTheorem 1 in the paper does not seem relevant to the approach and seems to be arbitrarily presented. It would be better if the authors could clarify how this theorem connects to their approach.\\n\\nForestier, S., Portelas, R., Mollard, Y., & Oudeyer, P. Y. (2017). Intrinsically motivated goal exploration processes with automatic curriculum learning. arXiv preprint arXiv:1708.02190.\\n\\nVeeriah, V., Oh, J., & Singh, S. (2018). Many-goals reinforcement learning. arXiv preprint arXiv:1806.09605.\\n\\nFlorensa, C., Held, D., Geng, X., & Abbeel, P. (2018, July). Automatic goal generation for reinforcement learning agents. In International conference on machine learning (pp. 1515-1528).\\n\\nGraves, A., Bellemare, M. G., Menick, J., Munos, R., & Kavukcuoglu, K. (2017). Automated curriculum learning for neural networks. arXiv preprint arXiv:1704.03003.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An okay paper with many limitations\", \"review\": \"### Summary\\nThis paper focuses on the problem of goal conditioned reinforcement learning. The authors propose an alternative way of performing Bellman updates for goal conditioned value function. Specifically, the proposed method first partitions the space of state goal pairs by performing a K-means clustering, and then estimates the visitation frequency of a state goal pair to be inversely proportional to the maximum difference of Q values within the cluster. For state goal pairs with low visitation frequency, the authors construct a lower bound of the Bellman target value by finding the Q value of a near state goal pair and subtracting the Lipchitz value multiplied by the distance. The authors then use this lower bound as target value to update the Q function. To generate goals for hindsight replay, the authors adopted a skew-fit type method to the empirical distribution of the K-means clusters to sample goals for replay.\\n\\nThe authors evaluate the proposed method on 3 simulated robotics manipulation tasks and compare against HER-EBP and CHER as baselines. The results indicate that the proposed method outperforms prior methods in terms of sample efficiency. The authors also provide ablation studies for the Lipschitz constant hyperparameter.\\n\\n\\n### Comments\\nThe paper is well written and the idea proposed in this paper is easy to follow. The authors clearly demonstrate the advantage of the proposed method in the 3 robotic manipulation tasks. However, I do have some concerns about the proposed method and results.\\n\\nFirst of all, the proposed method seems to be based on some heuristics which have neither been proven to be correct nor been verified empirically. For example, the authors estimate the visitation frequency of a state goal pair to be inverse of max Q function variation within the cluster (equation 4). It is not clear why this is a good estimate, since it is possible for an entire cluster to have low visitation frequency and also low Q value variations because the Q function has not been trained much in the region of the cluster. It would be important to provide either a proof or an empirical study.\\n\\nMoreover, the proposed method relies on some assumptions that might not hold true for many goal conditioned environments. For example, one assumption lies in the use of L2 distance in equation 3. In many goal conditioned tasks such as maze navigation, the L2 distance might not be a good metric between state goal pairs since two states close in L2 distance could be on two sides of a wall. The paper does not include discussion or empirical evaluations for such tasks.\\n\\nThirdly, it is not clear to me the Bellman iteration in equation 6 would converge to the optimal value. The authors only prove that it is a contraction and therefore will converge to some fixed point. However it is unclear to me whether the fixed point would be the same optimal Q value as the unmodified Bellman iteration.\\n\\nFinally, the proposed method introduces a few extra hyperparameters, such as the cluster K, the Lipchitz constraint of Q function L and the visitation frequency threshold. From the ablation study, we know that the proposed method is sensitive to the choice of L. Therefore, the natural question is whether the observed performance improvement is due to the tuning of these extra hyperparameters. 
Therefore, it would be important to perform more experiments on a wider range of tasks such as those in [1] and [2].\\n\\nDue to these limitations, I would not recommend acceptance for this paper before they are addressed.\\n\\n\\n\\nReferences\\n\\n[1] Pong, Vitchyr, et al. \\\"Temporal difference models: Model-free deep rl for model-based control.\\\" arXiv preprint arXiv:1802.09081 (2018).\\n[2] Eysenbach, Ben, Russ R. Salakhutdinov, and Sergey Levine. \\\"Search on the replay buffer: Bridging planning and reinforcement learning.\\\" Advances in Neural Information Processing Systems. 2019.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
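For reference, here is a minimal sketch of the lower-bounded Bellman target as this review describes it: rarely visited (state, goal) pairs fall back to a nearby visited pair's Q value minus a Lipschitz penalty. The function and argument names are hypothetical, and `q_fn` stands in for the learned critic.

```python
import numpy as np

def bounded_target(x, q_fn, visited, L, freq, threshold):
    """x: a (state, goal) feature vector; visited: array of stored pairs;
    freq: a callable estimating visitation frequency (e.g. equation 4)."""
    if freq(x) >= threshold:
        return q_fn(x)                     # well-visited: trust Q directly
    d = np.linalg.norm(visited - x, axis=-1)
    i = int(np.argmin(d))
    # Lipschitz continuity |Q(x) - Q(x')| <= L * d(x, x') gives the bound
    return q_fn(visited[i]) - L * d[i]
```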
"{\"title\": \"Hard to follow; insufficient validation; not enough detail to reimplement; should revise/improve before resubmission\", \"review\": \"The authors introduce a wealth of changes to the standard HER agent and obtain a performance improvement in 3 multi-goal tasks. The main observation made by the paper is that relabeled experiences may be very off-policy / out-of-distribution, and so value estimates for such experiences will be bad.\", \"to_help_with_this_the_authors_propose_the_following\": [\"apply K-means to cluster real (state, goal) tuples.\", \"estimate the likelihood of a given (state, goal) tuple X by finding the closest cluster center in real experience, sampling n real experiences Y^i in that cluster, and taking the minimum of 1/d(X, Y^i).\", \"they change the Bellman targets in case the likelihood estimate is low. In particular they change it to a lower bound based on some nearby real experience minus the distance times Lipschitz constant.\", \"they use relative goals (g_original - g_current) instead of absolute goals (g_original).\", \"there is some kind of curriculum on goal relabeling\", \"This paper is hard to follow. The word usage and sentence structure is unnatural, and I find myself guessing at what exactly the authors mean. This carries through to the math. I think I understand what the modified Bellman backup above equation (6) is doing, but I'm still not entirely sure. I'm also not really following the Section that includes equation (8). As a result, the contributions are a bit unclear.\", \"Theorem 1 is not trivial and there is indeed doubt in my mind. A proof should be provided upon revision.\", \"The related works section can be greatly improved. You should be relating the related work to your own.\", \"An appendix was not provided, despite being referenced, and so the paper is missing additional results (the 3 environments are insufficient), implementation details, and hyperparameter details. Without these, this paper cannot be reimplemented and is not in a publishable state.\"], \"nits\": \"- Isn't Equation (5) just the definition of Lipschitz continuity, so I'm confused by what is meant by \\\"it's reasonable to claim that [it] holds\\\".\\n-\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Mu2ZxFctAI | Uncertainty-aware Active Learning for Optimal Bayesian Classifier | [
"Guang Zhao",
"Edward Dougherty",
"Byung-Jun Yoon",
"Francis Alexander",
"Xiaoning Qian"
] | For pool-based active learning, in each iteration a candidate training sample is chosen for labeling by optimizing an acquisition function. In Bayesian classification, expected Loss Reduction~(ELR) methods maximize the expected reduction in the classification error given a new labeled candidate based on a one-step-look-ahead strategy. ELR is the optimal strategy with a single query; however, since such myopic strategies cannot identify the long-term effect of a query on the classification error, ELR may get stuck before reaching the optimal classifier. In this paper, inspired by the mean objective cost of uncertainty (MOCU), a metric quantifying the uncertainty directly affecting the classification error, we propose an acquisition function based on a weighted form of MOCU. Similar to ELR, the proposed method focuses on the reduction of the uncertainty that pertains to the classification error. But unlike any other existing scheme, it provides the critical advantage that the resulting Bayesian active learning algorithm guarantees convergence to the optimal classifier of the true model. We demonstrate its performance with both synthetic and real-world datasets. | [
"Active learning",
"Bayesian classification"
] | Accept (Poster) | https://openreview.net/pdf?id=Mu2ZxFctAI | https://openreview.net/forum?id=Mu2ZxFctAI | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"IvfpCpTeDti",
"MavdkqUY-7R",
"hLSBOgvjmNp",
"Ug1_pySg0gc",
"_orRqzyzXZu",
"be5FommsQH",
"Cl9j9TBZ47d",
"ODa07P0mvlH",
"jXdJ_onw86Q",
"TnMvSoAw1zJ",
"FkhoPsHrZj"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040432066,
1606152404114,
1606151751128,
1606150333373,
1606149723700,
1606149321832,
1606147529615,
1604741002301,
1603884073441,
1603879813940,
1603855813182
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3616/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3616/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3616/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3616/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper proposed weighted-MOCU, a novel objective-oriented data acquisition criterion for active learning. The propositions are well-motivated, and all reviewers find the analysis of the drawbacks of several popular myopic strategies (e.g. ELR tends to stuck in local optima; BALD tends to be overly explorative)) interesting and insightful. Reviews also appreciate the novelty of the proposed weighted strategy for addressing the convergence issue of MOCU-based approaches. Overall I share the same opinions and believe the paper offers useful insights for the active learning community.\\n\\nIn the meantime, there were shared concerns among several reviewers in the readability (structure and intuition), lack of empirical results on more realistic active learning tasks, and limited discussion on the modeling assumptions. Although the rebuttal revision does improve upon many of these points, the authors are strongly encouraged to take into account the reviews, in particular, to further strengthen the empirical analysis and discussions, when preparing a revision.\"}",
"{\"title\": \"Response to AnonReviewer2 Part 2\", \"comment\": \"Q5. Discussion on other converging method.\\n\\nA5. (1) We agree that many methods can converge to the true model asymptotically, and as a result, can converge to the optimal classifier of the true model, including cyclic sampling and BALD. However, these methods may not sample efficiently as they may reduce the model uncertainty that do not affect classification performance as we explained. \\n\\n(2) The significance of our proof is that: 1) ELR/MOCU based methods directly reduce the uncertainty affecting classification but, as we analyzed, they may not converge due to the myopic nature. 2) Our weighted-MOCU based method approximates the ELR method and still directly reduces the uncertainty affecting classification. Therefore it is more efficient, by prioritizing queries directly improving classification performance, than other converging methods that reduce total uncertainty, including BALD. This also has been shown empirically in different setups in our experiments.\\n\\nQ6. Complexity of class space. \\nA6. Consider a large class space with size $M$. In WMOCU function, the predictive distribution (line 22 in Algorithm 1) is calculated for $O(MN_xN_{\\\\theta})$ times. In ACQUISITIONFUN, WMOCU is called for $M$ times. So the complexity of calculating the acquisition function is $O(M^2N_xN_{\\\\theta})$. \\n\\nQ7. Performance on different noise level \\nA7. Data noise levels can affect the efficiency of different active learning methods. We have run preliminary tests on different noise levels. It is clear for some methods, such as MES, depending on uncertainty of candidates, the performance can degrade significantly with high noise. For the other methods, their performances are not that sensitive to the noise level and the performance trends are similar to the random sampling benchmark. \\n\\nQ8. Performance gap between ELR and weighted MOCU. \\nA8. Regarding the large gap between ELR and WMOCU in Fig. S8, obtained from real-world datasets, in contrast with the averaged performance with different true models in our synthetic experiments, it is expected that some methods perform much better than others, depending on the underlying feature-label distributions and data quality. \\n\\nWe have explained the confusion term \\u2018side\\u2019 in the revision, and we have added error bars to our results to help better understand the performance trend.\"}",
"{\"title\": \"Response to AnonReviewer2 Part 1\", \"comment\": \"Thank you for your comments and helpful suggestions. We address your specific questions and concerns below.\\n\\nQ1. Need more experiments in high-dimensional space and more complex models, such as neural networks. Performance improvement is limit. \\n\\nA1. (1) We would like to emphasize that the settings of pool-based Bayesian active learning are to address the lack of labeled data. The presented work aims to develop a label-efficient active learning method that provides both short-term and long-term improvements (in terms of prediction performance per candidate training sample to query), importantly with a strong theoretical guarantee of convergence. This theoretical guarantee is extremely important when the feature dimension and model complexity increases, as it ensures that our active learning scheme will converge to the true optimal classifier. Our complexity analyses show that the time complexity increases linearly with the sizes of both feature and output spaces. The main purpose of the presented experimental results is to validate our theory with empirical demonstration of the issues of the existing active learning methods and the convergence of our weighted-MOCU based method. With increasing feature dimension and model complexity, it can take much more computing hours to validate the convergence of active learning algorithms to the optimal classifier, and this is precisely why a theoretical guarantee is extremely important \\u2013 as it provides confidence that the algorithm will converge to the optimal classifier, without having to empirically show it based on significant computations (or actual sample acquisitions, which is unrealistic). \\n\\n(2) Regarding the application to neural networks, as we focus on Bayesian classifiers here, we would need to implement Bayesian neural networks to test these methods, for which training can further add more computational burden. Hence, we leave these practical applications for future work but focus on fundamental theoretical contributions in this submission, along with empirical demonstration of its importance. Last but not least, we would like to emphasize that our weighted MOCU has the same complexity as ELR, and our method is also practical as demonstrated by previous papers with ELR methods.\\n\\n(3) Regarding performance improvement, in this paper, the performance comparison between ELR and weighted MOCU have verified the theorem showing that our weighted-MOCU based method achieves data efficiency both at the beginning of the active learning procedure and in the long run, by focusing only on the uncertainty that directly affects classification \\u2013 instead of the total uncertainty by BALD. The experimental results in different setups have shown consistent improvements over existing methods. Of course, the improvement is dependent on the underlying feature-label distributions and data quality and therefore in some setups, the empirical improvement can be limited. However, the proposed method has clear advantage over other methods (e.g., MES and BALD) too, which may perform poorly for complex models in which some model uncertainty may not directly affect the learning objective of interest.\\n\\nQ2. Notation \\nA2. In the revised paper, we have removed the (*) with simplified but consistent notations.\\n\\nQ3. Algorithm pseudocode \\nA3. 
We have added the pseudocode for calculating the acquisition function, together with a computational complexity analysis, to the main text in our revision.\\n\\nQ4. Validation of the theory. \\nA4. We have added an additional experiment in Appendix G to compare the ELR and weighted MOCU methods for classification with noisy observations. With noisy observations, we are not sure of the optimal prediction with finite observations. Therefore, the MOCU value during the learning procedure should always be positive. In Fig. S5, we have shown the value changes of MOCU and the maximum of the acquisition function during the active learning procedures. In contrast to the ELR method getting stuck, in our weighted MOCU method, the MOCU is positive and continues to decrease during the whole procedure; the maximum of the acquisition function is also positive, indicating that the learning procedure does not get stuck. The MOCU reaching a very small value ($10^{-8}$) at the end validates the theory of our asymptotic convergence. Fully verifying the asymptotic property empirically is impossible; once again, this is exactly why the theoretical convergence proof is very important.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for your review. We address the concerns as follows:\\n1. The intuition behind the weight choice. \\nWe have added Fig. 2 in the revision to show the comparison of the MOCU and weighted MOCU function. We believe that it can provide further intuition why the strict concavity is desired for weighted MOCU to guide active learning efficiently considering both short-term and long-term gains. We have also generalized the weighting scheme and the weight is chosen as $1-cK$, with $c$ controlling the approximation of weighted MOCU to the original MOCU. From Fig.2 we can see with $0<c\\\\leq 1$, the weighted-MOCU function is below the original MOCU and is strictly concave.\\n2. The limit of proof. \\nRegarding non-uniform utility, we assume that the reviewer meant that the false positive cost and the false negative cost can be different. In that case, the expression of MOCU and $K$ will change, but they are still piece-wise linear functions of the $\\\\pi(\\\\theta)$, and we should still choose the weight as $1-cK$. Only this time the feasible range of $c$ to have the weighted MOCU to be concave will change, depending on the utility function. As a result, our weighted-MOCU based active learning still has the theoretical convergence guarantee while ELR may still get stuck. \\nWith continuous parameters, $\\\\pi(\\\\theta)$ is a distribution, and then $K$ and $G$ are both functionals. We believe our proof should still be valid.\\n3. Complexity comparison. \\nAssume all competing algorithms are applied on the same setup of pool-based active learning with $N_x$ candidates and $N_{\\\\theta}$ parameters. ELR has the complexity of $O(TN_x^2N_{\\\\theta})$, same as our weighted MOCU method. BALD and MES have a complexity of $O(TN_xN_{\\\\theta})$, which is the complexity of computing the posterior. In our original submission, the complexity analysis was in Appendix A. We have reorganized the submission with complexity analyses in both Section 3 and Appendix B in the revised version. \\n4. The confusing statement. \\nThat statement emphasizes the difference between converging to the true model and converging to the true optimal classifier. We have tried to revise that statement in the revised version for clearer presentation.\"}",
"{\"title\": \"Response to AnonRevierwer4\", \"comment\": \"Thanks very much for your comments\\n1. Details of what happened in the ELR active learning \\nWe have added an additional experiment in our Appendix G to compare the ELR and weighted MOCU methods. In Fig. S5, we have shown the value changes of MOCU and the maximum of the acquisition function during the active learning procedures. In the figure, we can see in ELR active learning, after 22 iterations, the maximum of the acquisition function turns to 0 while the MOCU gets stuck on a positive value. Therefore, ELR gets stuck and is degenerated to random sampling based on the adopted tie-breaking strategy. On the other hand, in our weighted-MOCU based method, the MOCU is positive all the time and keeps decreasing and the maximum of the acquisition function is also positive all the time. Therefore, it can query the candidates effectively both at the beginning of the active learning procedure and in the long run. \\nWe have also added Fig. 2 in the revision that can provide further intuition on why ELR/MOCU-based method may get stuck. We also have explained how the strict concavity forced by our weighted MOCU can efficiently guide active learning to approach to the true optimal classifier in the long run. \\n\\n2. The limit of proof \\nWe believe that we can extend the proof to the cases where the support of $\\\\theta$ is continuous. In that case, $\\\\pi(\\\\theta)$ is a distribution, and then $K$ and $G$ are both functionals. Lemma 2 should still be valid. \\nWhile for the cases where the support of $x$ is continuous, the proof can be difficult. However, we'd like to emphasize that our paper focuses on the pool-based active learning as we stated throughout the paper, where the support of $x$ is indeed discrete. \\n\\n3. Extension for multi-class classification problem \\nWe have included an algorithm in Appendix F to solve the multi-class classification problem with corresponding discussion. We note that we have not proved its convergence. \\nSince the OBC predictions depend on the predictive distribution $p(y|x)$, we require the weighting function to capture the update of $p(y|x)$ given one single query. The current weight $(1-cK)$ only depends on $\\\\max_y p(y|x)$, which is enough for binary classification problems as $\\\\max_y p(y|x)$ changes if the $p(y|x)$ changes. However, that's not true for multi-class cases as shown by the counter examples in Appendix E. So in Appendix F, we propose to use the softmax of $p(y|x)$ as the weighting function because it's concave and can capture the change of $p(y|x)$. As we have shown in Fig. S2, the performance of this new weighted MOCU (weighted-MOCU2) is better than the original weighted MOCU on the three-class classification problem. \\n\\nWe have clarified our proofs and fixed the incorrect notations in our revised version.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"1. Revision\\nWe thank the reviewer for constructive critiques and have significantly revised the paper following the reviewer's suggestions. We have revised the abstract for a clearer summary. In the introduction, we have removed some descriptions of competing algorithms and emphasized that we focus on the active learning methods that directly maximize the learning performance. We have separated the MOCU introduction in a new subsection. Moreover, we have significantly revised Section 3.2 and added Fig. 2, which can provide a better intuitive illustration of MOCU and weighted MOCU differences for understanding the issues of ELR/MOCU methods. We believe it also better motivates our weighted-MOCU based active learning method development. \\n\\n2. Significance of the convergence proof \\nWe agree that many methods can converge to the true model asymptotically, and as a result, can converge to the optimal classifier of the true model including cyclic sampling and BALD. However, these methods may not sample efficiently as they may reduce the model uncertainty that do not directly affect the classification performance as we explained.\", \"the_significance_of_our_proof_is_that\": \"1) ELR/MOCU based methods directly reduce the uncertainty affecting classification, but we analyzed that they may not converge due to the myopic nature. 2) Our weighted-MOCU based method approximates the ELR method and still directly reduces the uncertainty affecting classification. Therefore it is more efficient, by prioritizing queries directly improving classification, than other converging methods, including BALD, which has been shown empirically in different setups in our experiments.\\n\\n3. Experiments are only on toy datasets. \\nWe would like to emphasize our paper focuses on theoretical results to make sure that our weighted-MOCU active learning can achieve better data efficiency than existing methods. With the datasets in different setups, including the real-world UCI datasets, we have validated our theoretical results empirically. Specifically, we focus on the performance comparison between ELR and weighted MOCU to verify the theorem. We also have tried to show that other methods (MES and BALD) that reduce the total model uncertainty (instead of only the objective uncertainty that actually affects learning objectives) can perform poorly. We note that these active learning methods can be implemented to more complicated datasets and models but demonstrating the expected convergence can take time. We will implement these algorithms for other datasets and evaluate them accordingly.\"}",
"{\"title\": \"General response\", \"comment\": \"General response\\nWe thank all the reviewers for their time and efforts in reviewing the paper and provide constructive suggestions. We have significantly revised our paper based on all four reviewers\\u2019 comments and we would like to highlight our major changes in our revision as follows: \\n1. We have reorganized the paper to improve the readability by rearranging the lemmas, proofs, and algorithms presentation with complexity analysis as suggested. \\n2. We have revised Section 3.2 to provide clearer analysis on the myopic issue of ELR methods. \\n3. We have added Fig. 2 and discussions in Sections 3.2 and 3.3 to intuitively illustrate the difference between MOCU and the weighted MOCU. The detailed setup for Fig. 2. is included in Appendix D. \\n4. In Section 3.3 we have generalized the weighting function as $1 - cK$ with an additional parameter $c$. This parameter can be used to balance the trade-off between short-term and long-term benefits of the proposed active learning method.\\n4. We have added a new experiment in Appendix G to compare ELR and weighted-MOCU based methods by showing the MOCU and acquisition function changes during the active learning procedures (Fig. S5). The experiment directly validates Lemma 4 and Theorem 1. \\n5. In Appendix G, we have also added another additional set of experiments to compare performance of different active learning methods under different noise levels (Fig. S6).\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"This paper studies the label solicitation strategy in active learning. In particular, it focuses on the expected loss reduction (ELR) strategy, analyzes its problem, and modifies the original ELR method to make sure the active learner converges to the optimal classifier along learning iterations. The paper provides theoretical guarantees on the new method\\u2019s convergence. In the experiment, the proposed method is evaluated on synthetic data and UCI data. The improvement margin over the existing method is very limited.\", \"strong_point\": \"1. The paper\\u2019s finding on the existing ELR method is interesting and novel.\\n2. The theoretical analysis of the convergence of the proposed method seems to be sound.\", \"weak_point\": \"1. The experiment is conducted on low-dimensional data and the proposed method\\u2019s performance is not very competitive.\\n2. The notation in this paper can be confusing to readers, especially the use of (*). Star usually means the \\u201coptimal\\u201d.\\n3. The main paper does not have an algorithm.\\n4. There is no validation in experiments for the theory. In the synthetic experiment, it should be possible to simulate a case with ground truth optimal classifier and verify whether the proposed method actually converges to the optimal.\\n \\nMy major concern is the practical impact of the proposed method. Therefore, I recommend a weak reject for this paper (5).\", \"additional_questions_and_suggestions\": \"1. I think the paper would be improved if there is a discussion on how the proposed method can be extended to deal with high-dimensional data and/or using deep learning models.\\n\\n2. It seems to me the proposed weighted method is not the only way to guarantee convergence. But I am not sure about that. It would be nice to have some discussion about that.\\n\\n3. The one-step-look-ahead strategy involving expectation model change or expected loss reduction usually suffers from the large class space for computing the expectation. The experiments are mostly conducted in a small class space. It would also be good to have a discussion about the complexity in terms of class space.\\n\\n4. MES usually suffers a lot from noise (experiments in the appendix also show that). ELR methods are usually more robust to noise. I was wondering whether different noise level has been tried and how the proposed method compare with ELR on that. A similar question is also, for certain data there is a larger gap between the proposed method and ELR (in appendix). What would be the reason? \\n\\n5. The visual presentation can be improved for the paper, as well as the explanations. In figure 1\\u2019s explanation, I got very confused by the \\u201cside\\u201d. What does that mean?\\n\\n6. The results should be shown with error bars if experiments are conducted multiple times.\\n \\n================\", \"update_after_rebuttal\": \"I increased the score to 6 and appreciated the revision of the paper. The readability is improved. However, I also have different opinions with the authors in terms of how empirical evaluation of algorithms should be regarded in active learning research. So I would further encourage the authors to apply their method on high-dimensional large scale data, even it may take a lot of computing resources or require actual sample acquisition. \\n\\nI agree that the goal of active learning is to reduce the burden of labeling data. 
But it does not conflict with the requirement of dealing with high-dimensional (feature space) data. Also, I see a lot of active learning works focusing on theoretical analysis that cannot be easily put into real-world applications, which actually undermines the significance of the theory to some extent. In the real world, a lot of assumptions would be violated. As the authors also mentioned, it is \\\"expected\\\" that different feature spaces and data quality affect the performance. Therefore, I think the theory does not spare us from justifying our methods in practice. \\n\\nLast but not least, actual sample acquisition is not unrealistic given real-world problems. So I encourage the authors to further demonstrate the nice properties of the proposed algorithms in more realistic settings in the future.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A fairly good submission\", \"review\": \"This paper addresses the active learning paradigm in which the learner queries an oracle to obtain the class label of some inputs. Depending on the querying strategy, the learner can improve its classification model more or less efficiently.\\n\\nAmong other possibilities, the authors focus on a class of methods known as Expected Loss Reduction, which is optimal in one-step-ahead risk minimization. However, the authors shed light on the fact that the long-term effect of this strategy do not necessarily allow to reach an optimal classifier. They thus introduce an alternative approach that achieves this goal by focusing on loss uncertainty.\\n\\nMore precisely, the authors introduce the Mean Objective Cost of Uncertainty (MOCU) that captures the expected difference between the error of Bayesian optimal classifier (BOC) and the expectation (against the parameter theta posterior) of the error of the theta-best classifier. An active learning strategy can devised by looking for a new input that (roughly speaking) will cause the largest MOCU drop.\\n\\nBecause the strategy consists in selecting inputs that maximizes MOCU drop, MOCU will decrease as new class labels are revealed but this does not imply that MOCU will reach its minimum (zero). To achieve long run convergence of MOCU to zero (and thus get obtain the optimal classifier), the authors propose a so-called weighted strategy that solve this issue. \\n\\nFinally, the authors provide fair numerical experiments on both synthetic and real datasets. The results indicates that the proposed strategy seems to provide good results in a wider range of situations as compared to SOTA.\", \"major_remarks\": \"Although the authors provide some proof that the weight function can solve the long-run convergence issue, I wish they would provide intuitions as to why their particular choice can choose inputs that will be beneficial in this regard. The chosen weight function is going in the opposite direction of MOCU. Its effects are thus hard to interpret although it is instrumental to obtain a concave functions for the proofs.\\n\\nThe major drawback of the paper is that the scope of the proofs is very limited. Can the authors give insights as to what would remain valid in more realistic situations in which one has class imbalance, non-uniform utility (more general loss than 0-1), continuous parameter space ? The proof (unless I am mistaken) also do not account for approximation errors incurred by replacing expectation with empirical versions. \\nA few comments are provided in the appendices concerning multi-class problems.\\nI, however, reckon that the numerical experiments are reassuring \\n\\nAn algorithm is provided in the appendix but some comment on complexity as compared to prior arts in the main text would be appreciated.\", \"minor_remark\": \"\", \"i_am_bewildered_by_this_statement\": \"\\\"Converging to the true model is unnecessary and inefficient for classification\\\". Should this be understood as the myopic strategies standpoint ?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for the paper\", \"review\": \"\", \"summary\": \"This paper provides an interesting algorithm to address the previous Bayesian active learning query strategy in (binary) classification. By the simple modification, the algorithm can overcome the drawbacks of ELR in the convergence to the optimal classifier parameterized by $\\\\theta_r$. In experiments, the proposed algorithm can achieve the advantages of ELR and BALD simultaneously.\", \"reasons_for_score\": \"Overall, I vote for the marginally above the acceptance threshold. The proposed methodology is impressive and well-motivated. However, there are some lacks in addressing the problem of ELR in a qualitative manner. The assumptions of theorems look strong. I feel that there can be room to be improved in this paper.\", \"pros\": \"1. This paper was well motivated by the drawbacks of ELR and insightful comparison of BALD and ELR. In Bayesian approaches, these issues can be interesting and valuable.\\n2. The proposed algorithm is simple and addresses the problem caused by only mitigating the mean difference. The proposed algorithm can diminish the mean difference and ensure that $M^w ( \\\\pi^*(\\\\theta) )$ converges to 0. \\n3. The proposed algorithm can dominate the random and ELR. Also, the prior can be used to provide better results.\", \"cons\": \"1. The problem of ELR is not verified thoroughly. The stuck in the convergence of ELR can be due to the lack of considering the long term effects. However, the detail of this phenomenon is not verified in a more detailed manner. Can you show the details of what happened in the ELR active learning? At least, I want to see the values of $U$ and $M$ when the ELR is used. \\n2. The proofs assume that the supports of $\\\\pi(\\\\theta)$ and $x$ are finite, respectively, and prior is limited to the discrete-type probability measure. These assumptions can be a good starting point. However, it is better to provide any clue that we can extend this result to more general settings. \\n3. The counter-example of the proposed algorithm for multi-class in the Appendix shows this paper's prematurity in multi-class problems, and there are no details to address this problem. If you can provide some clues to address the multi-class problem, it is very helpful.\", \"minor_comments\": \"1. It is not easy to follow the proofs in Section 3.1. The authors claim that the lower bound of OBC error will be canceled in the (5). The equations after (5) can imply this cancelation. However, there is no direct wording to conclude this cancelation.\\n2. In the equation of $\\\\sum_y p^*(y | x) \\\\pi^*( \\\\theta | x, y) = \\\\pi^*( \\\\theta )$, $\\\\pi^* ( \\\\theta ) = \\\\pi^*(\\\\theta | x),$ it is better to clarify that $\\\\pi^*$ is not affected by $x$. \\n3. In the proofs of theorem 1, the notation of $X_A (w)$ is not consistent with the previous notation of $X_A$. \\n4. The infinite querying for a fixed $x$ cannot be realistic in some cases. Therefore, the proof should be extended for the case that the support of $x$ is an open subset of $\\\\mathbb{R}^p$ where $p$ is the dimension of $x.$\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Difficult to read\", \"review\": \"The authors of this paper introduced a new acquisition function of active learning for optimal Bayesian classifier. The new query strategy is based on mean objective cost of uncertainty, defined as the expected difference between losses of the optimal Bayesian classifier and the optimal classifier.\\n\\nI think this paper can benefit from revisions to improve clarity. Unfortunately, in the current state of writing, I found it very difficult to understand what this approach is doing exactly. The lack of clarity makes it hard to appreciate the interestingness of the proposed approach. In the following, I will list some possible improvements and my confusions.\\n\\n1, In the abstract, please rewrite the sentence \\\"To improve convergence ... classification error.\\\" This sentence is probably the most important summary of this work, but it's so long and dense that it's very difficult to parse. I believe people read abstract to get a general idea of what this paper is, not a dense summary of what the technical details are. \\n\\n2, The introduction should be called related work. Especially in 2nd-3rd paragraph, the authors tried to pack all competing algorithms in and explain why they don't work well. It is too detailed, I think. I expect more high-level descriptions of why this problem is important, where the field is now, or why the authors think this is an important problem to solve rather than, e.g., solving active learning for regression. \\n\\n3, The authors have a tendency of defining a symbol or an abbreviation, and expect the readers to register them in their memory. It would help the clarity significantly if the authors could just repeat in English what \\\\pi (or \\\\phi or C_\\\\theta , M, U etc) is again when they are mentioned. \\n\\n4, It's not clear to me why the entire introduction and definition of MOCU is under section 3.1 Analysis of ELR Methods. Perhaps section 3.1 needs to be segmented into more subsections.\\n\\n5, I'd recommend that the authors leave the most important theorem in the main paper and move all the less important lemmas and proofs to appendix. Then, the authors can have more space explaining the intuitions behind the proofs and the newly designed weighted MOCU. It is nice to see the convergence analysis, but it also makes me wonder how useful it really is. The theorem is saying, as we get infinite samples, we get the optimal classifier, under a bunch of conditions. It's nice to have, but it almost feels like every algorithm that can loop through all possible inputs can do that. What about the convergence rate that we care more about? Or how many active learning iteration is needed to achieve a certain performance.\\n\\n6, While it is unclear how useful the theoretical guarantees are, it is also unclear if the empirical results should enough evidence. Only toy datasets were examined, and the performance of the proposed approach is quite similar to other competitors.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}"
]
} |
HxzSxSxLOJZ | ResNet After All: Neural ODEs and Their Numerical Solution | [
"Katharina Ott",
"Prateek Katiyar",
"Philipp Hennig",
"Michael Tiemann"
] | A key appeal of the recently proposed Neural Ordinary Differential Equation (ODE) framework is that it seems to provide a continuous-time extension of discrete residual neural networks.
As we show herein, though, trained Neural ODE models actually depend on the specific numerical method used during training.
If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance.
We observe that if training relies on a solver with overly coarse discretization, then testing with another solver of equal or smaller numerical error results in a sharp drop in accuracy.
In such cases, the combination of vector field and numerical method cannot be interpreted as a flow generated from an ODE, which arguably poses a fatal breakdown of the Neural ODE concept.
We observe, however, that there exists a critical step size beyond which the training yields a valid ODE vector field.
We propose a method that monitors the behavior of the ODE solver during training to adapt its step size, aiming to ensure a valid ODE without unnecessarily increasing computational cost.
We verify this adaptation algorithm on a common benchmark dataset as well as a synthetic dataset.
| [
"ode",
"training",
"neural odes",
"flow",
"equal",
"solver",
"resnet",
"numerical solution resnet",
"numerical solution",
"key appeal"
] | Accept (Poster) | https://openreview.net/pdf?id=HxzSxSxLOJZ | https://openreview.net/forum?id=HxzSxSxLOJZ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"aSuZVg12y6",
"y7DGdAve4-Z",
"lwcCjq1FWB",
"lmW1kIw3qoa",
"Vqm1VAlfnj5",
"hIxrIm_h1B",
"3YYImpV3WHz",
"0Ny0QEZx0aM",
"PRCDuDHAZO",
"atW0xL3Do4",
"-5eKHv2W0cC",
"7HZcoXu7oqy",
"-H4t6-rVDxl",
"-o2M2cXCgSR",
"r40Ni7g-w2a",
"rce3BZrgkUo",
"SAu2VSh56bP",
"FZIll4EaW5M",
"i4zEL5XJEuh",
"7vCSfEs3_Hj",
"VlNf4akI95u",
"7Gca-Le9ank",
"D184PiAkMJQ"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040447617,
1606126473682,
1606071187275,
1605799911662,
1605799804369,
1605743977106,
1605662477709,
1605658304515,
1605658016512,
1605651487782,
1605605685965,
1605304979989,
1605299958205,
1605131932167,
1605131764893,
1605131573924,
1605131231651,
1605130996253,
1605130430009,
1603900365191,
1603873967691,
1603851866546,
1603848365683
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"~Juntang_Zhuang1"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3615/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper considers whether Neural ODEs have a valid interpretation as an ODE, showing that such an interpretation is not correct unless the discretization is chosen properly. This is important, given interest in Neural ODEs as models as well as they way they will be used, both for problems involving physical/temporal data as well as more generally. The paper proposes an algorithm for adapting integration step-size during training to partially address this issue, and empirical results are shown. There was a detailed discussion between reviewers and authors which led to improvements. The authors should also discuss the relationship of their work with https://arxiv.org/abs/2008.02389, which makes a similar point, in the final version.\"}",
"{\"title\": \"End of revision period fast approaching\", \"comment\": \"Unfortunately, the revision period is ending soon. Reviewer 2, are there any last concerns/questions that you want us to address? We might not be able to improve the manuscript further until revision deadline, but we could try to incorporate them into a potential camera-ready version.\\n\\nFurthermore, have we addressed your concerns adequately to maybe improve your judgement of our paper?\"}",
"{\"title\": \"Revision 3\", \"comment\": \"As suggested by Reviewer 4 we have added additional experiments to our paper. These experiments are:\\n\\n- Experiments on MNIST: Fixed step solver experiments, adaptive step size solver experiments, step adaptation algorithm\\nwith Euler as train solver and Midpoint as test solver.\\n- Step adaptation algorithm with Euler as train solver and rk4 as test solver on CIFAR10 and Sphere2.\\n- Step adaptation algorithm with Midpoint as train solver and rk4 as test solver on CIFAR10 and Sphere2.\\n\\nThe experimental results can be found in the Supplementary Material Section B.\"}",
"{\"title\": \"Re: Clarifying reviewer 2 theoretical and empirical questions\", \"comment\": \"Thank you for your interest in the problem and all the interesting questions.\\n\\nWe have added a new section to the appendix including new analysis plots and a new plot for Lady W.s fan.\\n\\nBoth trajectory crossing and Lady W.'s fan contribute to the global error.\\nHeuristically, there exists a step size where the sensitivity of the global error \\nis smaller than the sensitivity of the downstream layer.\\nTherefore, there should exist an h* such that for all h<h* the accuracy no longer changes.\\nWe discuss this at the end of Section 2.2 but maybe to emphasize this point more.\\nOverall, we are unaware of any theoretical statement supporting our heuristic statements.\\nThe difficulty we see is that one has to not only look at the theory of an individual IVP but of \\nseveral IVPs. \\nWe have looked into this and are not aware of any theoretical work in this area.\\nOverall there are many subtle effects and not every model violation shows in the test accuracy.\\nFor example if we have inter-class crossings between trajectories, this does not change the \\ntest accuracy.\\n\\n\\nDoes this answer some of your questions? Are there any specific points you would like us to clarify?\\nAdditionally, are there any effects we should provide more detail on in our paper?\"}",
"{\"title\": \"Revision 2\", \"comment\": \"We have incorporated further improvements and uploaded a new draft of our paper.\", \"these_improvements_include\": \"- We improved the layout of the figures by making them wider and adding\\nlegends as suggested by Reviewer and Reviewer 4 and Reviewer 3. \\nAdditionally, we reduced the amount of data shown in each figure to make them clearer. \\n- We added a descriptive text to the Supplementary Material and improved the\\noverall layout.\\n- We added a new section to the appendix in answer to Question 2 of Reviewer 2.\\n- We have added the suggested additional reference.\\n- Changed the title to: ResNet After All? Neural ODEs and Their Numerical Solution\\n\\nWe have started experiments, as suggested by Reviewer 4, and hope to \\nadd additional experimental results soon.\"}",
"{\"title\": \"Minor title change\", \"comment\": \"I see your point. What about \\\"ResNet After All? Neural ODEs and Their Numerical Solution\\\" (colon replaced by question mark)?\"}",
"{\"title\": \"I am okay with the title\", \"comment\": \"To be fair, I have no hard feelings about the title. My point is that the title seems potentially misleading. A Neural ODE might not necessarily represent a continuous dynamical system if the discretization of the numerical method is too coarse, and in this case a Neural ODE is a ResNet after all! However, if the Neural ODE model is carefully trained then it does learn an approximation for the underlying continuous dynamical system, and in this case it is not a ResNet. That is an important aspect and it should be better discussed somewhere in the abstract, introduction or in the conclusion.\"}",
"{\"title\": \"Original title is good\", \"comment\": \"re: title suggestions\\nI think \\\"ResNet After All: Neural ODEs and Their Numerical Solution\\\" is a very apt title for this paper!\"}",
"{\"title\": \"Clarifying reviewer 2 theoretical and empirical questions\", \"comment\": \"re: 2. Interplay of solver and vector field\\n\\nMy apologies for the confusion about the use of \\u201canalytic\\u201d. I wanted to suggest a concrete question for the authors to answer, and so I wanted to suggest answering the questions in the rest of the paragraph for the case of f(z) being an analytic function (I could equally have said some other condition like infinitely differentiable, continuously differentiable, continuous, and so on). And, unfortunately, I confused the matter by talking about f(z) being continuous at the end of the paragraph. What I was intending to say was that, assuming some properties of f(z) that you should feel free to choose, can you give any mathematical guarantees on the presence/absence of crossing over?\\n\\nThe points you mention are exactly the sort of questions I think the paper would benefit from exploring more. In particular, you write:\\n1) \\u201cFinding a step size h* for which no crossings occur does not guarantee that no crossings will occur for all step sizes below this step size h*.\\u201d\\n2) \\u201cHowever, for each problem, there exists some sufficiently small step size h_s such that for all h < h_s, no crossings do occur.\\u201d \\n3) \\u201cI.e., an empirical first occurence of no crossings is not a sufficient condition for small enough step size, but a small enough step size always exists for which even smaller step sizes will produce no crossings.\\u201d\\n\\nFor example, going down the theoretical route, are there any theorems proving that \\u201ca small enough step size always exists for which even smaller step sizes will produce no crossings\\u201d? And what conditions are required for such theorems to hold? Presumably these conditions will require a mathematical definition of \\u201ccrossing over\\u201d (I know you\\u2019re looking into this re: 1. Number of crossings).\\n\\nMeanwhile, for a given ODE, can you always find \\u201ca step size h* for which no crossings occur\\u201d that \\u201cdoes not guarantee that no crossings will occur for all step sizes below this step size h*\\u201d? How many times can a numerically integrated ODE alternate between non-crossing and crossing as step-size is decreased?\\n\\n-----------------\", \"re\": [\"3. Lady Windemere's Fan\", \"I realise after reading your comments that I'd misunderstood what Lady Windemere's Fan was (see my reply to your other comment). What I meant with my question was this:\", \"Suppose we have an ODE z\\u2019(t) = f(z). We could take equation (4) if we liked, but any ODE will do.\", \"Suppose we integrate it with a given method. At what threshold step-size does trajectory overlap cease to occur (and for all smaller step-sizes)?\", \"Suppose we train a neural ODE (and use the same method as above for the integration) to predict z(T) from z(0) for some specific time T. At what threshold step-size does test accuracy with step-sizes below this threshold cease to fall? E.g. we obtain 100% train accuracy when training on any step-size, but only with training step-size 10^{-3} does test accuracy with step-sizes less than 10^{-3} continue to be near 100%.\", \"How do these two threshold step-sizes compare?\"]}",
"{\"title\": \"Thanks for helpful clarifications\", \"comment\": \"Thanks for clarifying the points I was unsure on. The edits you've made in the updated version have cleared up the all questions I had.\", \"re\": \"Lady Windemere's fan.\\nI think the sentence \\\"However, in Figure 3 (a), the accumulation of error in the numerical solution (coined as Lady Windermere\\u2019s Fan in Hairer et al. (1993, x 1.7)) results in a valid feature for a linear decision (classification) layer\\\" threw me off. I mistakenly thought Lady Windermere's Fan was the numerical solution to equation 4, as opposed to the phenomenon of accumulation of error in the numerical solution to an ODE.\"}",
"{\"title\": \"Re: Juntang Zhuang\", \"comment\": \"Thank you for your interest in our paper. Indeed the work you mention is related to our work, specifically the\\nresults in Table 7. We will add [1] to our discussion, thank you for the suggestion. \\n\\nThe reason our model achieves much lower accuracy than [1] is due to the simple architecture of our model.\\nFor example the model in [1] consists of multiple ODE blocks, whereas our model consists of only a single ODE block.\\n \\n\\n[1] Zhuang, Juntang, et al. \\\"Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE.\\\"\", \"arxiv_preprint_arxiv\": \"2006.02493 (2020).\"}",
"{\"title\": \"Missing reference to a closely related work\", \"comment\": \"Hi, thanks for the nice work. We noticed that your work is closely related to the ICML paper [1], and would appreciate it if you could briefly discuss.\\n\\nIn the abstract, you said \\\"If the trained model is supposed to be a flow generated from an ODE, it should be possible to choose another numerical solver with equal or smaller numerical error without loss of performance\\\". To our knowledge, [1] is among the earliest to achieve this goal and have similar observation as in your work. Please see table 2 in the main paper of [1], and table 7 in the appendix of [1], where the ODE model is tested with different solvers without re-training and still achieve similar results.\\n\\nRegarding the accuracy on Cifar10, do you have any idea why your reported accuracy is below 60% in Fig 5? [1] has achieved over 90% test accuracy. Is it because of the model or the training? Thanks much in advance.\\n\\n[1] Zhuang, Juntang, et al. \\\"Adaptive Checkpoint Adjoint Method for Gradient Estimation in Neural ODE.\\\" arXiv preprint arXiv:2006.02493 (2020).\"}",
"{\"title\": \"Revision 1\", \"comment\": \"We have incorporated first improvements and uploaded a new draft of our paper.\", \"these_improvements_include\": \"- We worked on improving the clarity of the text in Section 2 specifically with regard\\nto Lady W.'s fan. We have also added ticks and tick labels to the axis of Figure 2. \\nReviewer 2 does this help you in regard to your Question 3? Additionally, we will try\\nto improve this Section further by introducing additional figures.\\n- We have improved the algorithm by using more precise notation\\n and we hope this makes it clearer how the proposed algorithm can\\nbe used in practice as suggested by Reviewer 3 and Reviewer 1. \\n- We have added the references suggested by Reviewer 3 to our related work Section.\\n- We fixed all the typos spotted by Reviewer 2, included the minor improvements suggested \\nby Reviewer 4 and fixed the algorithm numbering as noted by Reviewer 3.\\n- We changed the caption of Figure 1, as suggested by Reviewer 3.\\n\\nWe are looking forward to your feedback, and we will continue working on the improvements \\nwe promised (specifically improving the clarity of the figures, improving the Appendix, adding an\\nexample to answer Reviewer 2's Question 2 ).\"}",
"{\"title\": \"Re: Reviewer 1\", \"comment\": \"We thank the reviewer for their comments.\\n\\n1. **Adaptive step methods solve the problem:** We would like to thank the reviewer for this comment. We would like to point out that our work considers both adaptive and fixed step solvers (see Figure 4 (c) and (d) where adaptive step size methods where used for training the model.) Adaptive step size methods do not solve the problem described in the paper. Adaptive step size methods control *the local error* with a control signal typically specifying a desired number of significant digits. In this setting, the user has already done two things: run a preliminary analysis how many digits are required to likely get a qualitatively correct numerical solution and b) understood how much accuracy is required in the solution. In Neural ODEs, the required numerical accuracy depends on the downstream layers that change per gradient step. That is, the global error prerequisite of the numerical solver potentially changes with every gradient step. While there always exists a low enough tolerance such that the adaptation issue does not occur, this low enough tolerance may be prohibitively small in practice and is certainly leaving runtime efficiency on the table. We will try to make this point clearer in the paper.\\n2. We thank the reviewer for this suggestions and we will change the variable name to improve clarity.\"}",
"{\"title\": \"Re: Reviewer 3\", \"comment\": \"We thank the reviewer for their comments.\\n\\n1. **Maturity of proposed solution** We agree that the proposed solution is a first simple heuristic to approach the problem. It may not provide overwhelming success, but it does solve the problem. We hope that this helps the community to understand what the intrinsic flaw is all about and that it is solvable in general, thus leading to further research on this problem.\\n\\n We would like to understand in more detail why you did not find the algorithm convincing such that we can improve our work or add some discussion. We will fix the latex bug concerning the numbering of the algorithm - thank you for spotting this!\\n\\n2. We will adapt the algorithm such that it represents the full training loop.\\nWe would like to point out that the presented algorithm is not part of the solver but outside of\\nthe solver and can thus be applied to any solver.\\nWe would like to ask the reviewer to point out where we should provide additional clarification.\\n\\n3. The presentation of the adaption algorithm can indeed be improved. We will add additional details on the `calculate_accuracy_higher_order_solver()` function in the text and/or the appendix. We will also work on improving the overall clarity of the algorithm.\\n\\n4. We promise to improve the visualization on the experiments. We will do this by incorporating the suggestions by reviewer 4 minor comment 2. We are open to concrete suggestions on how to reduce the clutter in our figures.\\n\\nWe hope to have a first revision including the above changes by the weekend.\\nWe are currently running additional experiments and we will only update the figures\\nafter we have collected all results. This revision will take a bit longer.\"}",
"{\"title\": \"Re: Reviewer 2 Question and Clarification Requests\", \"comment\": \"## Questions and Clarification requests\\n1. We thank the reviewer for this question. For Figure 1 and 2, there are no true underlying dynamics.The Neural ODE model is only given the classification task and has to find some dynamics which solve the problem. In combination with the classifier, many different vector fields might be possible. We will clarify this in the main text.\\n\\n2. We thank the reviewer for pointing this out. The difference in the images is due to different scaling of the axis which were chosen such that the final position of all points is shown. We will add units to the axis of Figure 2 (a) and (b) to make this clearer.\\n\\n3. Our descriptions of Lady W.'s fan indeed needs further explanation - we will expand section 2.2. Lady W.'s fan does not refer to a specific problem, but how the local error gets accumulated into the global error. Lady Windemere's fan does not guarantee solving the XOR problem. But the example used in Section 2.2 is aimed towards showing that error accumulation can lead to dynamics which solve the XOR problem, even if the analytic solution to the ODE does not. The ODE we present in section 2.2. corresponds to a flow with increasing ellipsoids and an increase in the rotational speed for this problem. We discovered this model based on the knowledge that the precision of the solver influences how the rotational speed of the ellipsoids is resolved.\\n\\n4. \\\"Do you have any ideas of what directions you might head in in terms of regularising neural ODEs so that they manage to learn continuous semantics, even when trained at larger step-sizes?\\\" - We thank the reviewer for this interesting question. We do not have any precise ideas yet but restricting the Lipschitz constant of the Neural ODE to below 1 avoids crossing trajectories [1]. Additionally, forcing the model to learn simpler dynamics could reduce the critical step size (as done for example in [2]). We will add this discussion to the main text of the paper.\\n\\\"In particular, why did you go in the direction of an adaptive optimization algorithm, instead of, say, training the neural ODE with a randomly chosen step-size each iteration or even step?\\\" The idea of our algorithm is to keep the number of steps as small as possible. The idea of reviewer to use random step sizes is interesting. Random step sizes might not provide the wanted gradient information, as too large step sizes drive the system to discrete dynamics. Therefore, if the variance in the steps is too large, we believe that training might become difficult.\\n\\n 5. **Low accuracy on cifar:** The performance of our model is due to the simple architecture chosen for our experiments. To improve performance, other work uses a deeper classifier, an upstream downsampling block and often even multiple ODE blocks. We did not want to use an upstream block, a deeper downstream classifier block and multiple ODE blocks, as we want to maximize the contribution of the ODE block. We will add this to the explanation in the main text\\n\\n ## Typos and minor edits\\nThank you for spotting these and we will fix them.\\n\\n ## References\\n [1] Invertible Residual Networks, Behrmann et al., ICML, 2019\\n\\n [2] Learning differential equations that are easy to solve, Kelly et al., arXiv, 2020\"}",
"{\"title\": \"Re: Reviewer 2 Theoretical and empirical questions\", \"comment\": \"We would like to thank the reviewer for their detailed comments.\\n\\n1. **Number of crossings:** We thank the reviewer for this interesting question. We will get back to you if we find a solution.\\n\\n2. **Interplay of solver and vector field** We would like to work on answering this interesting question and therefore we ask the reviewer to please clarify a few details. Particularly, we kindly ask the reviewer to clarify what *analytical form of an ODE* is referring to. We believe that reviewer is either referring to that the analytical form of the right side of ODE f(z) is known or that the analytical solution to the ODE is known. We cannot think of a scenario where one would use a numerical solver for an ODE with known analytical solution, so we currently assume the reviewer considers the case of a right-hand side with known analytic properties.\", \"what_can_be_said_at_this_point\": \"Finding a step size h* for which no crossings occur does not guarantee that no crossings will occur for all step sizes below this step size h*. However, for each problem, there exists some sufficiently small step size h_s such that for all h < h_s, no crossings do occur. I.e., an empirical first occurence of no crossings is not a sufficient condition for small enough step size, but a small enough step size always exists for which even smaller step sizes will produce no crossings.\\n\\n Continuous right-hand sides are also not sufficient to eliminate the problem of trajectory crossings. We can construct a synthetic corner case highlighting these problems which we will add in the appendix (space permitting in the main text).\\n\\n3. **Lady Windemere's Fan:** We will provide additional explanations concerning Lady W.'s fan see also our answer\\nto question 3 below. We would like to know whether your question refers to the specific ODE presented in Eq. (4) or problems\\nwhere Lady W.'s fan can be observed in general? Note that Lady W.s fan is a problem *independent* of the trajectories crossing problem. In this case, valid ODE semantics are maintained, but the trained model still crucially depends on the discrete solver dynamics. We will try to make this clearer in the main text.\\n\\nWe hope to have a first revision including the above changes by the weekend.\\nThe addition of the synthetic corner case might take a bit longer, though.\"}",
"{\"title\": \"Re: Reviewer 4\", \"comment\": \"We thank the reviewer for their helpful comments.\\n\\n1. **Application to scientific problem:** We agree that it is currently unclear when a valid ODE interpretation would be required and that it might not be relevant to computer vision tasks yet. We believe that the Neural ODE model class---as opposed to ResNets---is only beginning to develop its full potential. For instance, [1] discusses that Neural ODEs as a model class might improve robustness. Therefore, we believe our work to be relevant beyond the immediate justification.\\n\\n That being said, we can try to run experiments, if you have a concrete application in mind?\\n\\n2. **Extended results:** The suggestions by the reviewer for extending the experiments with the step adaption algorithm are interesting. We agree that the step adaption algorithm should also work for the proposed cases.If time permits we will run additional experiments with the step adaption algorithm. We will start additional experiments for mnist and add them to the paper as soon as they are finished. We cannot promise these results until the end of the rebuttal but we hope that everything finishes in time.\\n3. Please see our answer to Reviewer 2 point 5.. Additionally, to improve the performance of a \\\"single\\\" ODE-block one could add additional augmented dimensions [2] (with the same number of parameters they achieve 60.6% on CIFAR-10).\\n4. We will work on reformatting the appendix and add additional explanatory text. \\\"preliminary algorithm\\\": We have called this algorithm preliminary as the results have been below our expectations compared to the relative success in the fixed step case. The problem of continuous vs. discrete semantics also appears for adaptive methods but the simple heuristic algorithm we present does not seem to work as well as expected for adaptive methods. We want to include the results for the tolerance adaptions algorithm nonetheless as we think they emphasize our overall findings. We are aware that the presented algorithm is only the first step of many in the direction of solving this problem and we hope that the community will continue to tackle this challenge. We will remove the word preliminary as it is unfitting as pointed out by the reviewer.\\n5. We thank the author for point us towards these publications and we will add these papers to our discussion.\\n6. We would like to point out that Reviewer 1 seems to find the title quite fitting. Nevertheless, we are open to suggestions. One option would be the title we chose for a previous version of this paper: \\\"When are Neural ODEs proper ODEs?\\\"\\n7. We are currently applying for clearance of the code from our institution. We hope to achieve this before the end of the discussion phase, but we promise to release the code eventually.\\n\\n## Minor comments\\n1. We thank the reviewer for pointing out this detail. We will add an explanation for the relationship between step size and number of steps in the introduction. We will also fix the wording on page 7 (number of steps -> number of iterations).\\n2. We will incorporate the helpful suggestions of the reviewer. We will increase the width of the figures and we will also add a legend to make the figures clearer.\\n3. We thank the reviewer for this suggestion and we will adapt the paper to use the word\\\"adaptation\\\".\\n\\nWe hope to have a first revision including all points of clarity by the weekend. 
A revision including improved figures and, resources permitting, experiments will take a bit longer.\\n\\n## References\\n[1] On Robustness of Neural Ordinary Differential Equations, Yan et al, 2020\\n\\n[2] Augemented Neural ODEs, Dupont, Doucet, Teh, 2020\"}",
"{\"title\": \"Re: all\", \"comment\": \"We thank the reviewers for their detailed and insightful comments. Reviewers 2, 3 and 4 seem to agree that our paper presents an important issue with high relevance to the Neural ODE community. Reviewers 2 and 4 raise concerns regarding the experimental coverage and the theoretical underpinning, leading to hesitation whether this submission should be accepted now or at a future conference. Reviewer 1 is skeptical about the generality of the presented problem.\\n\\nWe will try to address the reviewers' concerns within the rebuttal period and we will try to convince you that the work is indeed ready to be published now. To this end, we will improve the clarity and presentation and we will also try to add missing experiments.\\n\\nFor individual feedback, please refer to the comments below your respective reviews.\"}",
"{\"title\": \"Review of \\\"RESNET AFTER ALL: NEURAL ODES AND THEIR NUMERICAL SOLUTION\\\"\", \"review\": \"Paper summary:\\n\\nThe paper demonstrates how neural ODE models generating features for downstream tasks (or simply modelling trajectories) may rely on the discreteness of integration methods to generate features and thus fail in the exact ODE limit of integration step-size going to zero. The paper highlights particular failure modes, such as the discreteness of integration methods allowing for qualitative differences like overlapping trajectories (impossible for the exact solution of an autonomous ODE) compared to exact solutions, or quantitative differences like the accumulated error of a numerically integrated ODE resulting in useful features for downstream tasks. The paper empirically demonstrates the phenomenon that low training losses can be achieved for a range of integration methods and integration step-sizes, but that, of these models, the ones robust to changes in integration method and decreases in integration step-sizes at test time are those trained below a certain (empirically determined) integration step-size threshold. This is attributable to models trained with lower integration step-sizes maintaining features that are qualitatively the same as or quantitatively close to those features produced by the same model with smaller integration step-sizes. The paper proposes an algorithm for adapting integration step-size during training so that the resulting neural ODE model is robust to changes in integration method and integration step-size at test time. The algorithm is empirically demonstrated to achieve the same performance as grid search (for similar numbers of function evaluations).\\n\\n------------------------------------------\", \"strengths_and_weaknesses\": \"I liked the paper as it raised an important question of whether and when we should interpret neural ODEs as having continuous semantics and gave a few examples of failure cases. The results of the step-size adaptive algorithm were also promising (it matched grid search but with less work). Further, the paper was clearly written and easy to understand.\\n\\nHowever, as it stands, I\\u2019m assigning a score of 5. I like the paper and think that it would be a good workshop paper but is not ready for the main conference. The reason for this is that the theoretical part of the paper is mostly qualitative, whilst the experiments are not extensive enough to make up for the qualitative theoretical justification. If one of these two areas were to be improved, I would be happy to increase my score. To be concrete, here are examples of theoretical and empirical questions whose answers (just one would do) would increase the paper\\u2019s score for me:\\n\\n1)\\tHow can we mathematically describe when numerically integrated trajectories cross over in terms of the time over which the ODE is integrated and on the initial separation of the trajectories?\\n\\n2)\\tSuppose we are integrating an ODE for which we have the analytic form. Are there additional behaviours we need to watch out for? For example, after passing below a step-size where we transition from crossing trajectories to non-crossing trajectories, is it possible to transition back to crossing trajectories as we continue to decrease the step-size? 
Or can we rule out this case, for example, in the case of f being continuous in the equation z\u2019(t) = f(z)?\n\n3)\tFor Lady Windermere\u2019s Fan with the true dynamics, at what step-size does trajectory overlap cease to occur (assuming a minimum initial separation of trajectories and a fixed time period)? And if we instead attempt to learn Lady Windermere\u2019s Fan with a neural ODE, at what step-size does the neural ODE start to be robust against test-time decreases in step-size? How does this latter step-size compare to the former step-size? \n------------------------------------------\", \"questions_and_clarification_requests\": \"1)\tWhat was the true underlying model for figures 1 and 2?\n\n2)\tWhy are the classifier decision boundaries different in figures 2a and 2b? I thought that you trained a neural ODE with h_train = 1/2 and then tested this model for both h_test = 1/2 and h_test = 1/4. \n\n3)\tI didn\u2019t understand the connection between Lady Windermere\u2019s Fan and the XOR problem. Does running Lady Windermere\u2019s Fan on R^2 with an XOR labelling lead to trajectory end points that are linearly separable? If so, how did you discover this?\n\n4)\tYou mention at the end of section 2.2 that \u201cThe current implementation of Neural ODEs does not ensure that the model is driven towards continuous semantics as there are no checks in the gradient update ensuring that the model remains a valid ODE nor are there penalties in the loss function if the Neural ODE model becomes tied to a specific numerical configuration.\u201d Do you have any ideas of what directions you might head in in terms of regularising neural ODEs so that they manage to learn continuous semantics, even when trained at larger step-sizes? In particular, why did you go in the direction of an adaptive optimization algorithm, instead of, say, training the neural ODE with a randomly chosen step-size each iteration or even step?\n\n5)\tWhy was the CIFAR-10 classification accuracy so low (~55%)? Previous work on neural ODEs has obtained accuracy in the 80-95% range. Is this just due to the limited expressiveness of the upstream classifier, cf. \u201cFor all our experiments, we do not use an upstream block f_u similar to the architectures proposed in Dupont et al. (2019). We chose such an architectural scheme to maximize the modeling contributions of the ODE block.\u201d \n\n------------------------------------------\", \"typos_and_minor_edits\": [\"Write Initial Value Problem (IVP) on first usage of IVP.\", \"Fig.2 caption \u2013 \u201cThe model was trained \u2026, we used \u2026\u201d -> \u201cThe model was trained \u2026, and we used \u2026\u201d\", \"Page 8, Conclusion, line 3 \u2013 \u201c\u2026 an continuous\u2026\u201d -> \u201c\u2026 a continuous \u2026\u201d\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for Neural ODEs and Their Numerical Solution\", \"review\": \"This paper empirically studies whether Neural ODEs have a valid ODE interpretation. The authors show that a Neural ODE model does not necessarily represent a continuous dynamical system if the discretization of the numerical method is too coarse. Indeed, this is a widely overlooked issue that has been largely ignored in the Neural ODE community. To address this issue, the authors propose a novel adaptive step size scheme.\", \"reasons_for_my_score\": \"Overall, I vote for marginally above acceptance threshold. The presented ideas and results are very interesting and relevant for the community. It is important to better understand if and when a Neural ODE has a valid ODE interpretation. I am willing to increase my score if the authors can address my concerns during the rebuttal period.\", \"pros\": [\"-------\", \"The work addresses a crucial aspect of Neural ODEs that is particularly important in the context of scientific and robotics applications. That is, because here one is often not only interested in the predictive accuracy of the model, but also whether the model has a valid ODE interpretation.\", \"The authors show several illustrating examples that demonstrate how the Neural ODE is affected by the specific solver configuration used for training.\", \"The adaptive step size scheme is simple yet effective. The experiments clearly demonstrate the advantage of this scheme.\"], \"cons\": [\"-------\", \"It would be good if you apply your adaptive step size scheme to an actual scientific problem, where it actually matters that the ODE has a valid ODE interpretation. It is not intuitive why an ODE interpretation is relevant for computer vision tasks.\", \"An extended set of results in Section 3.1 would help to better understand the performance of the proposed algorithm. For instance, how does the results change if you use RK4 for testing, instead of midpoint (this should only be marginally more expensive). Also, what happens if you train with midpoint and then use RK4 for testing. Next, can you also add results for MNIST or FMNIST in Table 1 in order provide an additional set of experiments.\", \"The Neural ODE block that you consider is very shallow. I assume, that it should be possible to achieve about 75% accuracy on CIFAR10 using a state of the art Neural ODE block.\", \"The Appendix is poorly formatted. Typically, I would expect that Figures are embedded into a descriptive text. It would be nice if you provide at least some discussion for why you provide these Figures and what we can learn from it (in addition to the captions). Also, it is not clear to me why you are presenting a `'preliminary tolerance adpation algorithm'. Are you proposing to use this algorithm, or is this an idea for a future work that still needs to be tested and improved? Typically, I would not expect to see any preliminary results in a conference paper.\", \"The authors miss to discuss how their works relates to some recent theoretical results [1,2,3].\", \"The title of the paper seems not fitting, i.e., what do mean by `ResNet after all'? You do not discuss ResNets in much detail in this paper.\", \"Please provide code in order to reproduce the results.\"], \"minor_comments\": \"-------\\n\\n* Some parts of the paper are unclear. For instance, the authors do not establish the relationship between step size and number of steps. This should be discussed somewhere below Eq. (3). 
On page 7 you say `that 'after a pre defined number of steps (we chose k= 50)'. I assume here you refer to the number of iterations?\\n\\n* Some of the Figures are crowded and difficult to parse. For instance, there is much going on in Figure 4. First, it would help if you increase the width of the plots (there is lot's of white space on the left and right of the figure). Further, it would help if you reduce the content slightly. Finally, a legend would be very much appreciated. \\n\\n* I think, it sounds better to use 'adaptation' instead of 'adaption'.\\n\\n\\nReferences\\n-----\\n\\n[1] Bo, Lijun, Agostino Capponi, and Huafu Liao. \\\"Deep Residual Learning via Large Sample Mean-Field Optimization: Relaxed Control and Gamma-Convergence.\\\" arXiv:1906.08894 (2019). \\n[2] Thorpe, Matthew, and Yves van Gennip. \\\"Deep limits of residual neural networks.\\\" arXiv preprint arXiv:1810.11741 (2018).\\n[3] W. E, J. Han, and Q. Li, \\\"A mean-field optimal control formulation of deep learning,\\\" Research in the Mathematical Sciences, vol. 6, no. 1, p. 10, 2019.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Good contribution but algorithm is not sound\", \"review\": [\"Paper makes a good contribution by pointing an intrinsic flaw in the NeuralODE technique. The problem is that even with an error accruing step size, the results of a NerualODE can be good, leading to a false belief that the ODE used in the construct represents the phenomena, but instead it is the dynamic behaviour arising from the mixture of the ODE and the solver that separates the classes well.\", \"However, the proposed solution does not seem convincing. It seems like a work in progress. The solution is proposed in algorithm 2,4,6 which should have been algorithm 1,2,3.\", \"It is not made clear how solvers would be able to use this algorithm.\", \"-The algorithm is not nicely constructed. Putting a function like \\\"calculate accuracy higher order solver();\\\" in an algorithm without fully describing what it does , is not advised.\", \"Figures are not illustrative, there is too much clutter. I believe a point could be made with the same amount of figures but with less clutter.\", \"Based upon the contribution made by the authors, it seems appropriate that their results are published right now.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"An Investigation of the discrete dynamics of Neural ODEs\", \"review\": \"**Summary**\\nThe authors show that Neural ODEs exploit the ODE-solver used for training to realize a dynamical system that violates the ODE vector field property of non-overlapping trajectories. The authors conclude that NODEs are not real ODEs, hence the paper's title \\\"ResNet after all.\\\". To avoid such behavior, the authors propose to monitor the accuracy metrics using a finer ODE solver and decrease the solver's step size if a discrepancy between the two different stepsize accuracies is observed.\\n\\n**comments** \\nWhile this paper's claims and experiments are relatively narrowly focused, the overall conclusion and proposed solution are clear.\\nHowever, I see a fundamental issue with the assumption of fixed stepsize solvers. The main reason why using adaptive stepsize solvers is to avoid such a problem of choosing the right stepsize. Consequently, I expect the described problem to be more elegantly\\nsolved using a lower relative tolerance value of a dynamic stepsize solver.\\n\\nIn algorithm 2, there is a variable called test_acc. The term \\\"test_acc\\\" is overloaded in this context, and it can refer to the test-set accuracy or the training accuracy under a higher-order ODE solver. If the authors refer to the training accuracy under a higher-order ODE solver (which I assume), please change the variable's name.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
2m0g1wEafh | Benefit of deep learning with non-convex noisy gradient descent: Provable excess risk bound and superiority to kernel methods | [
"Taiji Suzuki",
"Shunta Akiyama"
] | Establishing a theoretical analysis that explains why deep learning can outperform shallow learning such as kernel methods is one of the biggest issues in the deep learning literature. Towards answering this question, we evaluate the excess risk of a deep learning estimator trained by noisy gradient descent with ridge regularization on a mildly overparameterized neural network,
and discuss its superiority to a class of linear estimators that includes the neural tangent kernel approach, random feature model, other kernel methods, $k$-NN estimator, and so on. We consider a teacher-student regression model, and eventually show that {\it any} linear estimator can be outperformed by deep learning in the sense of the minimax optimal rate, especially for a high-dimensional setting. The obtained excess risk bounds are so-called fast learning rates, which are faster than the $O(1/\sqrt{n})$ rate obtained by the usual Rademacher complexity analysis. This discrepancy is induced by the non-convex geometry of the model, and the noisy gradient descent used for neural network training provably reaches a near-global optimal solution even though the loss landscape is highly non-convex. Although the noisy gradient descent does not employ any explicit or implicit sparsity-inducing regularization, it shows a preferable generalization performance that dominates linear estimators. | [
"Excess risk",
"minimax optimal rate",
"local Rademacher complexity",
"fast learning rate",
"kernel method",
"linear estimator"
] | Accept (Spotlight) | https://openreview.net/pdf?id=2m0g1wEafh | https://openreview.net/forum?id=2m0g1wEafh | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"BrD2Jxo7eD",
"MBysw9N10T_",
"nSY44iyH26",
"agPBKqqj3L3",
"lQ5SBiiM9Oi",
"aXNIAs_4WDn",
"wCrgiRWlS1c",
"v54ncMSSqjl",
"XxZRbtaQBny",
"_HVFOHI0U1",
"UxGrEoBjcoJ",
"O5rJCVOfjZS",
"0-ZMqwo6vLn"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040483106,
1606214131227,
1605608562882,
1605608485914,
1605608441392,
1605607947154,
1605607913864,
1605607814111,
1605607581095,
1603925807361,
1603902745397,
1603713588757,
1603293406924
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3614/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3614/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3614/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3614/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3614/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Spotlight)\", \"comment\": \"This paper analyzes deep networks optimized using non-convex noisy gradient descent. The main result shows that in a teacher-student setting, the excess risk converges in a fast-rate and is stronger than any linear estimators (which include kernel methods). The paper also gives a convergence rate result that depends on some spectral gaps (which can be very small) but not on dimension. Overall the paper is interesting. It should probably emphasize that the dependency on spectral gaps (and the fact that they could be exponentially small) on the convergence as the current abstract suggests efficient convergence.\"}",
"{\"title\": \"Response\", \"comment\": \"Dear authors,\\n\\nthank you for you clarifications, the few concerns that I had were answered. Additionally I think the addition of the intuitions will help readers to appreciate the results more. Thus I will remain with my vote for accepting.\\n\\nBest Regards\"}",
"{\"title\": \"To all reviewers\", \"comment\": \"Thank you very much for your careful reading and suggestive comments.\\nWe have revised our paper according to your comments. The major changes are as follows:\\n1. We have added a remark to the definition of the linear estimator that states that $\\\\varphi_i$ can be any measurable function.\\n2. A intuition about why the minimax risk of the linear estimators is same as that on the convex hull of the target function class is added. \\n3. We have added a notion about the convergence rate of the algorithm, especially about the spectral gap.\\n4. We have added some missing important references. \\n\\nSincerely yours,\\nAuthors.\"}",
"{\"title\": \"Reply to Reviewer #1 (2)\", \"comment\": \"(This is a continuation of \\\"Reply to Reviewer 4 (1)\\\". We are sorry for the long reply.)\", \"additional_feedback\": \"Q. It would have been very useful if you could provide some intuition on why that is the case. \\nA. \\nIntuitively, since the linear estimator is linear to the observations $(y_i)_{i=1}^n$ of outputs, a simple application of Jensen's inequality yields that its worst case error on the convex hull of the function class $\\\\mathcal{F}$ does not increase compared with that on the original one $\\\\mathcal{F}$. Please look at Hayakawa & Suzuki (2020) for its rigorous proof. We have added this sentence in the revised version. \\n\\nQ. You show that the rate of the neural network is independent of the dimension. Do you have any intuition on why that is the case? \\nA. This is because, for each m, we only need to specify one parameter $(w_{1,m},w_{2,m})$ that have (d+2)-dimensional, and we do not need to estimate a linear combination of each neuron. However, a linear estimator should cover such a linear combination by their model.\\n\\nQ. Under Equation (5), instead of \\\"more faster,...,more faster\\\" write \\\"the faster,..., the faster\\\" \\nA. Thank you very much for pointing out this typo. We have fixed this in the revised version.\"}",
"{\"title\": \"Reply to Reviewer #1 (1)\", \"comment\": \"Thank you very much for your positive feedback and insightful comments.\\n\\nQ. It appears to me that the neural networks are not part of the linear functions class, and thus having a neural network target makes the linear functions being misspecified. Is that true? If so, does that play a role in the learning rate gap? \\nA. \\nThis interpretation is a little bit different from what is actually going on. Indeed, we can construct a model in which the neural network model can be included, for example, we can construct a kernel function for which the corresponding RKHS includes the neural network model. For example, if we set $k(x,x') = \\\\int \\\\sum_{m=1}^\\\\infty c_m \\\\sigma_m(w_{1,m}^\\\\top [x;1])\\\\sigma_m(w_{1,m}^\\\\top [x';1]) d \\\\nu_m(w_{1,m}),$ where $c_m$ is a positive constant with $\\\\sum_m c_m = 1$ and $\\\\nu_m(\\\\cdot)$ is a probability measure. Then, the neural network model is included in this RKHS (or at least, every element in the neural network model can be approximated by an element in the RKHS with any precision). Another example is a Sobolev space. Since the neural network model is smooth, all functions in the neural network model is included in a Sobolev space with an appropriate smoothness parameter. Therefore, it is easy to find a linear space that includes the neural network model, and thus the comparison with linear estimators including a kernel ridge regression is not completely unfair. However, as our theorem states, such an RKHS with a kernel function including ${\\\\mathcal{F}}{\\\\gamma}$ becomes unnecessarily large because it must cover any $\\\\sigma_m(w_{1,m}^\\\\top [x;1])$ with different $w_{1,m}$, then the convergence rate becomes slower. On the other hand, the neural network can appropriately pick up only one $w_{1,m}$ for each $m$, which makes the estimator much more efficient than kernel methods. This intuition can be mathematically formulated as the convex hull argument that we have used in the main text.\\n\\nQ. what is $\\\\varphi_i$? \\nA. \\nFirst, we would like to emphasize that the linear estimator admits \\\"any\\\" measurable (and L^2-integrable) function as $\\\\varphi_i$. You may choose anything you like. Of course, you can choose the \\\"best\\\" function as $\\\\varphi_i$ that would minimize the excess risk. One important example is a kernel ridge regression. Since the kernel ridge regression can be written as $\\\\hat{f}(x) = \\\\sum_{i=1}^n y_i ((K_X + \\\\lambda I)^{-1}\\\\mathbf{k}(x))_i$, then we can set $\\\\varphi_i(x_1,\\\\dots,x_n,x) = ((K_X + \\\\lambda I)^{-1}\\\\mathbf{k}(x))_i$ (we can check that the right hand side is a function of $(x_1,...,x_n,x)$). It is also possible to use such a kernel function that was introduced above.\\n\\nQ. Instead of noisy gradient descent you actually use semi-implicit euler scheme for optimization, do you have any thoughts on how that might effect actual performance? \\nA. \\nWe think its actual impact is quite marginal. In practice, we use a finite dimensional approximation ($W^{(M)}$) where the width $M$ is not so large (indeed M is less than the sample size $n$). In such a regime, the difference between those two schemes is small if we choose the step size $eta$ sufficiently small. Therefore, we think that the usual Euler scheme instead of the semi-implicit Euler scheme would work well in practice.\\n\\nQ. As far as I can see your current analysis does not hold for relu-activations, how easy might an extension to that be? 
\\nA. \\nYou are absolutely true. We think we might be able to extend the analysis to non-differentially activation functions such as ReLU because adding noise to the dynamics is equivalent to smoothing the objective function as shown by [R1]. This would be far from trivial, but we think it is possible. \\n\\n[R1] Bobby Kleinberg, Yuanzhi Li, Yang Yuan. \\\"An Alternative View: When Does SGD Escape Local Minima?\\\", Proceedings of the 35th International Conference on Machine Learning, PMLR 80:2698-2707, 2018. \\n\\n\\nQ. Are you aware of any lower bounds for the neural network case, are your rates optimal? \\nA. \\nThank you very much for bringing up an important issue. Unfortunately, the minimax optimal rate for the class $\\\\mathcal{F}_\\\\gamma$ is not known. More precisely, we have a rough lower bound $n^{-\\\\frac{\\\\gamma + \\\\alpha_1 + s\\\\alpha_2 + 1/4}{\\\\gamma + \\\\alpha_1 + s\\\\alpha_2 + 1/2}}$ so far, but we are not completely sure whether this is tight. We would like to defer deriving the minimax optimal rate as a future work.\"}",
"{\"title\": \"Reply to Reviewer #2\", \"comment\": \"Thank you very much for your positive feedback and suggestive comments.\\nWe reply to your specific comments one by one as follows.\\n \\n(1) Please look at the derived lower bound again. It is $n^{-\\\\frac{2\\\\tilde{\\\\beta} + d}{2\\\\tilde{\\\\beta} + 2d}}$ that includes the input dimension $d$ in the rate. Actually, if we increase $d$ to infinity, the exponent $\\\\frac{2\\\\tilde{\\\\beta} + d}{2\\\\tilde{\\\\beta} + 2d}$ converges to $1/2$ yielding the convergence rate $1/\\\\sqrt{n}$. \\n(2) Please note that $a_m$ is a fixed constant which we do not need to estimate. Moreover, what we need to estimate is {\\\\it not} the parameters itself but the function $f_W$. Therefore, identifiability of parameters is not required in our setting. Actually, in our proof, we are not showing the convergence of estimated parameter to a \\\"true\\\" parameter but we have shown only the convergence of estimated \\\"function\\\" to the true function $f_{W^*}$. \\n(3) Thank you for bringing up an important point. The noisy term is required to get out of a local optimal. Without the noisy term, the optimization dynamics can stack in a local minimum. Another role is that it makes the dynamics of the solution behaves as if it is generated from a Bayes posterior distribution, which enables us to analyze the convergence rate of the excess risk. \\n(4) We think it is more or less straight forward to extend the upper bound of the excess risk of neural network to a thin deep neural network. On the other hand, it would be more involved to derive a tight lower bound of the excess risk of linear estimators.\"}",
"{\"title\": \"Reply to Reviewer #3\", \"comment\": \"Thank you very much for your insightful comments.\\n\\nQ. In the teacher-student setting, what is the minimax rate for any estimator of instead of just linear estimators? Or is the upper bound for the noisy gradient descent method minimax optimal? \\nA. \\nThank you very much for bringing up an important issue. Unfortunately, the minimax optimal rate for the class $\\\\mathcal{F}_\\\\gamma$ is not known. More precisely, we have a rough lower bound $n^{-\\\\frac{\\\\gamma + \\\\alpha_1 + s\\\\alpha_2 + 1/4}{\\\\gamma + \\\\alpha_1 + s\\\\alpha_2 + 1/2}}$, but we are not completely sure whether this is tight. We would like to defer deriving the minimax optimal rate as a future work. \\n\\nQ. Traditionally we impose smoothness assumption on the target function directly (e.g. Holder space). So what is the main advantage of this teacher-student setting? \\nA. \\nYes, the Holder/Sobolev space is a typical setting to characterize the convergence rate via the smoothness. However, the geometry of these typical spaces are not non-convex and there does not hardly appear difference between deep and shallow. On the other hand, using the teacher-student setting induces more sparsity which plays an important role to obtain a non-convex geometry of the model. This is related to the feature extraction structure of neural network. Via the feature extraction ability, the neural network focuses more on a specific part of the input that induces non-convexity. On the other hand, the Holder space directly uses the whole information of the input and its smoothness is uniform over all input $x$, which results in a convex model. \\n\\nQ. In Theorem 1, I feel a little bit confused why the dimension d also appears in the numerator, which is different from classical lower bounds. For example, If we assume that f^o belongs to a Sobolev space of order s, then the minimax rate of excess risk will be $n^{-2s/(2s+d)}$, which goes to 0 as goes to infinity. Do I have any misunderstanding? \\nA. \\nThis is indeed a good point. The convergence rate on the Sobolev space indicates that the complexity of the space is much more sensitive to the input dimension $d$ than the neural network model. Actually, the deep learning approach is not affected by the curse of dimensionality. The phenomenon that $d$ appears in the numerator of the minimax rate can be found in the lower bound of linear estimators in the Besov space too. For a Besov space $B^\\\\beta_{p,q}$ with $\\\\beta > d(1/p - 1/2)$ with $p < 2$, the lower bound of linear estimators is $n^{-\\\\frac{2(\\\\beta - d(1/p - 1/2))}{2(\\\\beta - d(1/p - 1/2)) + d}} = n^{-\\\\frac{2 s + d}{2 s + 2d}}$ where we set $s = \\\\beta - d/p$. In this case, the dimension $d$ appears in the denominator. By carefully looking at the proof, we can notice that the role of $s$ and $\\\\tilde{\\\\beta}$ are the same. Therefore, we think our lower bound is natural.\"}",
"{\"title\": \"Reply to Reviewer 4 (2)\", \"comment\": \"(This is a continuation of \\\"Reply to Reviewer 4 (1)\\\". We are sorry for the long reply.)\\n\\nQ. Besides, I feel that the comparison with Raginsky et al., 2017 and Erdogdu et al., 2018 after Proposition may not be fair. \\nA. As we have remarked above, the convergence of the infinite dimensional version is guaranteed due to the existence of regularization term. Roughly speaking, the regularization term makes the analysis similar to a finite dimensional one. However, it is far from trivial. Actually, the dependency of the step size \\\\eta is changed from $O(\\\\eta)$ to $O(\\\\eta^{1/2-a})$ in exchange for enabling the infinite dimensional analysis. In that sense, the assumption is different but we think it is quite important to see the connection between finite dimensional analysis and infinite dimensional one.\\n\\nQ. Moreover, it may not be appropriate to state that gNGD achieves a fast convergence rate, it seems that the spectral gap $\\\\Lambda^*$ still has an exponential dependency on the parameter beta (shown in Proposition 3). \\nA.\\nWe guess the terminology \\\"fast convergence rate\\\" was a bit confusing. We used this terminology to indicate the convergence rate of the \\\"statistical convergence rate\\\" of the excess risk with respect to the sample size $n$. We did not intended to indicate the algorithmic convergence rate of NGD with respect to the number of iterations $k$. As you pointed out the spectral gap $\\\\Lambda^*_\\\\eta$ depends on the parameter $\\\\beta$ exponentially. This is stated in the definition of the spectral gap, but since this paper's focus is more on the statistical analysis, we did not mention it explicitly due to the space limitation. On the other hand, we realized that to avoid the confusion you pointed out, we had better to add one sentence which explicitly shows this point. Accordingly, we have added such a sentence in the revised version right after Proposition 1.\\n\\nQ. In Theorem 2, can we view the expected error between F_{W_k^{(M)}} and f^{\\\\circ} as a variant of generalization error (in expectation)? If this is the case, can we somehow apply the results in the following paper, and obtain a O(n^{-1}) generalization error bound for the Langevin dynamic gradient algorithm (if considering finite dimension case)? \\nA. Thank you for pointing out an important issue. We cannot directly obtain our excess risk bound from the generalization error bound ($L(\\\\hat{f}) - \\\\hat{L}(\\\\hat{f})$). As you noticed, the excess risk can be bounded by using the relation $L(\\\\hat{f}) - L(f^\\\\circ) = (L(\\\\hat{f}) - \\\\hat{L}(\\\\hat{f})) + (\\\\hat{L}(\\\\hat{f}) - \\\\hat{L}(f^\\\\circ)) + (\\\\hat{L}(f^\\\\circ) - L(f^\\\\circ))$ and the fact that the second term of the right hand side can be small for $\\\\hat{f}$ with small training error and the third term can be bounded by the Hoeffding's inequality. However, this approach *does not* yield our excess risk bound. Instead, we have used a variant of the *local* Rademacher complexity technique (we need several modification to deal with the Bayes estimator). This analysis utilizes the strong convexity and smoothness of the squared loss that enables us to utilize a fact that an estimator $\\\\hat{f}$ with small excess risk is close to the true function $f^\\\\circ$. By using this, we can obtain a faster learning rate and, consequently, we can compare the convergence rate.\\n\\nQ. 
A suggestion: It is better to present Section B.1 before the first part of Section B, since the proofs of Proposition 1, Theorem 2, and Corollary 1 largely rely on the assumptions and propositions in Section B.1. \\nA. Thank you for your careful reading and insightful suggestion. We have changed the order of the sections following your suggestion.\\n\\nQ. Lastly, the authors may also want to include the following two NTK papers in the introduction section. \\nA. Thank you for your suggestion. We could not cite several important references due to the space limitation. We have included the references that you suggested in the revised version.\"}",
"{\"title\": \"Reply to Reviewer 4 (1)\", \"comment\": \"Thank you very much for your insightful comments.\\n\\nQ. The network function is different from the commonly-used one. What if using a standard parameterization of a two-layer network but performing projected (noisy) gradient descent? \\nA.\\nThe projected gradient descent is a proximal gradient descent with an indicator function, but the current theory of the infinite dimensional gradient Langevin dynamics does not cover a non-differentiable and unbounded objective. Thus, it is difficult to rigorously show its convergence in that setting. However, by the analogy from the finite dimensional analysis, it should not affect so much to the result in practice.\\nA more critical assumption in our analysis is that the scaling factors $a_m$ and $b_m$ converge to 0 as $m \\\\to \\\\infty$. This makes the model include a function with high frequency components that leads separation between deep learning and linear estimators.\\n\\nQ. It seems that the goal of all estimators is to recover the teacher networks. However, the Bayes estimator actually uses the same network structure as the teacher one. It is more interesting to investigate the case where the underline teacher network is independent of the learned network. \\nA.\\nWe agree with your opinion that a setting where a student model is different from the teacher model would be more interesting. On the other hand, the teacher-student model is quite basic and commonly used in the literature. In that sense, we consider that it is natural to discuss optimality of estimators on the teacher-student model. We may find a better neural network based estimator that outperforms the estimator we considered in the paper, but it does not contradict a fact that any linear estimator suffers from a curse of dimensionality but an appropriate deep learning approach does not. We think showing this fact in a simplest setting (such as teacher-student model) would be quite important in the literature.\\n\\nQ. In the description of hat f, the authors may need to clearly state the definition of the function \\\\phi_i. \\nA. \\nWe would like to emphasize that the definition of a linear estimator admits {\\\\it any} $\\\\varphi_i$. The only assumption for $\\\\varphi_i$ is that it is measurable with respect to $x$ and $x_1,\\\\dots,x_n$. Our lower bound of the excess risk of linear estimators is applicable uniformly to all estimator that has the form of $\\\\hat{f}$ (of course, that includes the kernel ridge regression). Therefore, you may consider the \\\"optimal\\\" choice of $\\\\varphi_i$ that would \\\"minimize\\\" the excess risk in some sense. Since we do not need to restrict the shape of $\\\\varphi_i$, we think that the result of Theorem 1 is a strong statement. We have added a note that explicitly tells $\\\\varphi_i$ can be any measurable function right after the definition of the linear estimator.\\n\\nQ. Does the result in Theorem 2 hold for any f^{\\\\circ}? \\nA. Yes, the result holds for any $f^\\\\circ \\\\in \\\\mathcal{F}_\\\\gamma$ uniformly.\\n\\nQ. In Proposition, the convergence results look similar to the following paper, while in their paper, the right-hand side of (6) converges to $O(\\\\eta^{1/2})$ when $k\\\\eta$ goes to infinity. \\nA.\\nWe guess you intended $O(\\\\eta)$ in their paper. If so, this is because of infinite dimensional setting and the regularization term. 
In our setting the regularization term is $\\|W\\|^2_{\\mathcal{H}_1} = \\sum_m m^2 (w_{1,m}^2 + w_{2,m}^2)$, but if we employ $\\|W\\|^2_{\\mathcal{H}_{p/2}} = \\sum_m m^p (w_{1,m}^2 + w_{2,m}^2)$ for $p > 1$, then that term becomes $\\eta^{(p-1)/p}$. On the other hand, the finite dimensional setting corresponds to taking the limit $p \\to \\infty$, which leads to $\\eta^{(p-1)/p} \\to \\eta$ and recovers the finite dimensional result ($O(\\eta)$) shown by Xu et al. (2017).\"}",
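To make the notion of a linear estimator in this exchange concrete (linear in the labels $y_i$, with arbitrary measurable $\varphi_i(x; x_1, \dots, x_n)$), here is a minimal NumPy sketch exhibiting kernel ridge regression in exactly that form; the RBF kernel and ridge parameter are illustrative choices, not assumptions taken from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_phi(x, X, lam=1e-3):
    # phi_i(x; x_1..x_n) depends only on the query x and the training *inputs*
    # X, never on the labels y -- which is what makes KRR a linear estimator.
    n = len(X)
    K = rbf_kernel(X, X)
    return np.linalg.solve(K + lam * n * np.eye(n), rbf_kernel(X, x[None, :])).ravel()

# f_hat(x) = sum_i phi_i(x) * y_i: linear in y for fixed inputs.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 3)), rng.standard_normal(50)
x = rng.standard_normal(3)
f_hat = krr_phi(x, X) @ y
```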
"{\"title\": \"Interesting work but requires more refined explanations and discussions\", \"review\": \"This paper is theoretical sound and well organized. This paper shows that the Bayes estimator with Gaussian prior can outperform the linear estimators (including kernel regression and k-NN), which I believe is indeed interesting and important.\\n\\nBesides, I would like to raise the following comments and questions.\\n\\n1. The network function is different from the commonly-used one. For example, the authors need to clip the output weights using tanh function. The authors state that the reason is to ensure the boundness condition of the network function. What if using a standard parameterization of a two-layer network but performing projected (noisy) gradient descent?\\n\\n2. It seems that the goal of all estimators is to recover the teacher networks. However, the Bayes estimator actually uses the same network structure as the teacher one. It is more interesting to investigate the case where the underline teacher network is independent of the learned network (e.g., using an overparameterized network to learn a smaller network).\\n\\n3. In the description of hat f, the authors may need to clearly state the definition of the function \\\\phi_i.\\n\\n4. Does the result in Theorem 2 hold for any f^{\\\\circ}?\\n\\n5. In Proposition, the convergence results look similar to the following paper, while in their paper, the right-hand side of (6) converges to O(\\\\eta^{1/2}) when k\\\\eta goes to infinity. Could you briefly discuss why in this paper, this quantity is in the order of O(\\\\eta^{1/2-a})?\\n\\nXu, Pan, et al. \\\"Global Convergence of Langevin Dynamics Based Algorithms for Nonconvex Optimization.\\\" arXiv preprint arXiv:1707.06618 (2017).\\n\\n6. Besides, I feel that the comparison with Raginsky et al., 2017 and Erdogdu et al., 2018 after Proposition may not be fair. In particular, the convergence results in these papers are derived based on different assumptions, thus their dependencies on the dimension are not directly comparable. \\n\\n7. Moreover, it may not be appropriate to state that \\u201cNGD achieves a fast convergence rate\\u201d, it seems that the spectral gap \\\\Lambda^* still has an exponential dependency on the parameter beta (shown in Proposition 3), which will be set as beta = \\\\Theta(n) in Theorem 2. This implies that the noisy gradient descent may require exponential time to output a good solution.\\n\\n8. In Theorem 2, can we view the expected error between F_{W_k^{(M)}} and f^{\\\\circ} as a variant of generalization error (in expectation)? If this is the case, can we somehow apply the results in the following paper, and obtain a O(n^{-1}) generalization error bound for the Langevin dynamic gradient algorithm (if considering finite dimension case)? \\n\\nMou, Wenlong, et al. \\\"Generalization bounds of SGLD for non-convex learning: Two theoretical viewpoints.\\\" Conference on Learning Theory. 2018.\\n\\n9. A suggestion: It is better to present Section B.1 before the first part of Section B, since the proof of Proposition 1, Theorem 2, and Corollary 1 largely rely on the assumptions and propositions in Section B.1.\\n\\n10. Lastly, the authors may also want to include the following two NTK papers in the introduction section. \\n\\nZou, Difan, et al. \\\"Gradient descent optimizes over-parameterized deep ReLU networks.\\\" Machine Learning 109.3 (2020): 467-492.\\n\\nCao, Yuan, and Quanquan Gu. 
\\\"Generalization bounds of stochastic gradient descent for wide and deep neural networks.\\\" Advances in Neural Information Processing Systems. 2019.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"Summary:\\n\\nThe paper aims to demonstrate the superiority of deep learning methods against kernel methods by comparing their excess risk bounds. In particular, the authors first derive the minimax lower bound for linear estimators by assuming that the target function can be represented by a teacher neural network, which implies that the linear estimators suffer from curse of dimensionality. Then the authors further derive a dimension-independent upper bound of the noisy gradient descent method for overparametrized two-layer neural networks, which theoretically confirms the benefit of deep learning methods in terms of convergence rates. The paper is well written and also interesting to read. Overall, I vote for accepting.\", \"concerns\": \"1. In the teacher-student setting, what is the minimax rate for any estimator of $f^0$ instead of just linear estimators? Or is the upper bound for the noisy gradient descent method minimax optimal?\\n2. Traditionally we impose smoothness assumption on the target function directly (e.g. Holder space). So what is the main advantage of this teacher-student setting?\\n3. In Theorem 1, I feel a little bit confused why the dimension $d$ also appears in the numerator, which is different from classical lower bounds. For example, If we assume that $f^0$ belongs to a Sobolev space of order $r$, then the minimax rate of excess risk will be $n^{-\\\\frac{2r}{2r+d}}$, which goes to 0 as $d$ goes to infinity. Do I have any misunderstanding?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Comments to \\\"Benefit of deep learning with non-convex noisy gradient descent: ...\\\"\", \"review\": \"#### General comments\\nThis paper aims at proving superiority of neural network models to any linear estimators, including kernel methods. To attain this purpose, this paper focuses on two layer neural network class with an infinite width. For the non-parametric regression models within this neural network class, this paper establishes a sharp excess risk error of the least square methods with noisy gradient descent update, although such optimization may be heavily non-convex. Moreover, a lower bound of all linear estimators under the $L_2$-norm are accordingly given when the true function is within the two layer neural network class, thereby showing superiority to kernel methods. Overall\\uff0cthe contribution of this paper is obvious and the literature review is full to some extent.\\nThis paper is organized well and stated clearly. \\n\\n#### Specific Comments\\n\\uff081\\uff09After Theorem 1, the sentence \\\"for relative high dimensional settings, this lower bound becomes close to a slow rate $\\\\Omega(1/\\\\sqrt{n})$, which corresponds to the curse of dimensionality. \\\" I argue that this sentence may be uncorrected, \\nsince the mentioned rate is independent of the input dimension, which is not a real curse of dimensionality. \\n(2) A constraint on $f_{W}$ should be added, otherwise, it is impossible to identify $a_m$ and $\\\\bar{w}_{2,m}$ simultaneously. \\n(3) What is the role of noisy term in NSGD algorithm, is it a similar conclusion when the standard SGD is applied? \\n(4\\uff09What is the additional difficulty encountered when analyzing a thin but deep neural network?\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting and to the point results, technically very demanding, so difficult to verify correctness.\", \"review\": \"=========================\", \"summary\": \"The paper shows that a two-layer neural network (although an extension to deeper models seem unproblematic) may outperform a class of linear functions in terms of the excess risk learning rate, and in a minimax optimality analysis, and when approximating a target function from the neural network class. The paper essentially shows that linear functions have a problem with the non-convexity of the neural network class, and approximate the slow rate of 1/(n)^(1/2) for increasing dimension. A neural network trained with noisy stochastic gradient descent on the other hand has a faster rate, depending on several parameters.\\n\\n=========================\\n\\n=========================\", \"pros\": [\"Well written and polished paper.\", \"Technically sounds as far as I can tell. (Randomly checked some parts in more detail.)\", \"Setting and results may be interesting for a large audience.\", \"Main results and message of the paper are to the point.\", \"=========================\", \"=========================\"], \"cons\": \"Very technical and on some parts I would have liked some more intuition and discussion. See detailed feedback and also questions for rebuttal.\\n\\n=========================\\n\\n=========================\", \"scoring\": \"Overall I think this is a worthwhile contribution in understanding the difference in deep and shallow learning, and as the paper is very sound I will vote for accept. I will acknowledge, however, that there is a flurry of related work, as it is a very popular topic, and I can not vouch for the novelty of this contribution. The authors, however, covered much ground in that regard.\\n\\n=========================\\n\\n=========================\", \"questions_for_rebuttal\": \"It appears to me that the neural networks are not part of the linear functions class, and thus having a neural network target makes the linear functions being misspecified. Is that true? If so, does that play a role in the learning rate gap? In case it is not true, what is the essential difference then between the linear functions and the neural networks? Regarding that, what is phi_i in the definition of linear models?\\n\\nInstead of noisy gradient descent you actually use semi-implicit euler scheme for optimization, do you have any thoughts on how that might effect actual performance?\\n\\nAs far as I can see your current analysis does not hold for relu-activations, how easy might an extension to that be?\\n\\nAre you aware of any lower bounds for the neural network case, are your rates optimal?\\n\\n=========================\\n\\n=========================\", \"additional_feedback\": \"The result that the minimiax rate of linear functions over a space F is the same as over its convex hull was not known to me. For me it would have been very useful if you could provide some intuition on why that is the case.\\n\\nYou show that the rate of the neural network is independent of the dimension. Do you have any intuition on why that is the case?\\n\\nUnder Equation (5), instead of \\\"more faster,...,more faster\\\" write \\\"the faster,..., the faster\\\"\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
mxfRhLgLg_ | Deep Ecological Inference | [
"Nic Fishman",
"Colin McAuliffe"
] | We introduce an efficient approximation to the loss function for the ecological inference problem, where individual labels are predicted from aggregates. This allows us to construct ecological versions of linear models, deep neural networks, and Bayesian neural networks. Using these models we infer probabilities of vote choice for candidates in the Maryland 2018 midterm elections for 2,322,277 voters in 2055 precincts. We show that increased network depth and joint learning of multiple races within an election improves the accuracy of ecological inference when compared to benchmark data from polling. Additionally we leverage data on the joint distribution of ballots (available from ballot images which are public for election administration purposes) to show that joint learning leads to significantly improved recovery of the covariance structure for multi-task ecological inference. Our approach also allows learning latent representations of voters, which we show outperform raw covariates for leave-one-out prediction. | [
"ecological inference",
"representation learning",
"multi-task learning",
"bayesian deep learning"
] | Reject | https://openreview.net/pdf?id=mxfRhLgLg_ | https://openreview.net/forum?id=mxfRhLgLg_ | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Mwoc-3KJJk",
"26AICUAMRCQ",
"QTloYv5Y6l1",
"Clm8HLOAIB2",
"LAfj7eL7Fc7"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040512959,
1603952681230,
1603889798880,
1603862004379,
1603680246898
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3608/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3608/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3608/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3608/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper is very interesting and timely, but as the reviewers note there is significant room for improvement in the clarity of the presentation and evaluation. In addition to the references mentioned by the reviewers, some other relevant references are the following:\\n\\n[1] Evan Rosenman, Nitin Viswanathan, \\\"Using Poisson Binomial GLMs to Reveal Voter Preferences,\\\" https://arxiv.org/abs/1802.01053\\n\\n[2] Law, H. C. L., Sutherland, D., Sejdinovic, D., & Flaxman, S. (2018, March). \\\"Bayesian approaches to distribution regression.\\\" In International Conference on Artificial Intelligence and Statistics (pp. 1167-1176).\"}",
"{\"title\": \"Interesting problem/approach but the paper lacks details and is difficult to follow\", \"review\": [\"#### Summary\", \"The paper discusses an interesting direction to efficiently approximate the loss function in the ecological inference problem, which enables extensions using linear models, deep neural networks, and Bayesian neural networks. The proposed approach was evaluated using Maryland 2018 midterm elections data on a range of tasks.\", \"#### Strengths\", \"The paper tackles one important and practical problem of ecological inference: inferring labels from label proportions, which is applicable to a lot of settings, one of which is \\\"voting\\\" as studied in the paper\", \"The paper discusses an interesting direction to approximate the loss function of the ecological inference problem in an efficient manner, which enables different extensions, especially using Bayesian neural networks.\", \"The paper evaluates the proposed model using real Maryland 2018 midterm election data and produces interesting insights\", \"#### Weaknesses\", \"The paper is not easy to follow. Apart from various typos (see details below), I think the structure of the paper could be improved significantly to make it more accessible. For example:\", \"(1) Poisson binomial/multinomial losses were not introduced early in Section 1, which makes it hard to follow and understand the \\\"Contributions\\\" described at the end of Section 1\", \"(2) Although described in text in Section 1, it's still pretty unclear what the input data are. I'd suggest discussing the input data formally at the beginning of Section 2 before describing the techniques in details\", \"(3) It's very unclear what the evaluation tasks are (especially for people who are not familiar with the data and/or domain) and the intuitions behind why the tasks are suitable to evaluate the effectiveness of the proposed methods\", \"The paper lacks details on how the proposed methods (and baselines) are implemented. In addition, there are various baselines/methods included in the \\\"results\\\" section but it's unclear what they are in details.\", \"In addition, there are various typos / minor writing problems. Here are some of them:\", \"Sec 3: \\\"This gives us a useful test case for examining not correlations in voting patterns between races.\\\" \\\"not correlations\\\"?\", \"Sec 3: \\\"We demonstrate two practical use cases for out methods. \\\" -> \\\"We demonstrate two practical use cases for our methods. \\\"\", \"References in many places are without parentheses\", \"poisson -> Poisson\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting preliminary work, but insufficient experiments/analysis\", \"review\": \"This paper takes an approach to ecological inference inspired by deep learning. Ecological inference is the problem of learning individual labels when only large sets of aggregated data are available. It requires a way to estimate label propensities as a function of covariates. This paper proposes combining a multi-level model with deep learning to estimate voter propensities. The model is then applied to Maryland 2018 midterm election data, and is validated with demographic-level polling data (treating the polling data as ground truth) and with known vote correlations.\\n\\nSpecifically, the paper makes the assumption that distribution of vote counts for a precinct comes from a multinomial distribution, where global probability is averaged across all voters eligible to vote in the precinct (or to a binomial with the same probability averaging). The remaining problem is modeling these probabilities from the individual level covariates. The authors look at 3 models: a linear model, a feed-forward neural network, and a multi-level model where the varying slope coefficients are the output of a feed-forward neural network that takes as input an individual's covariates. The authors consider both frequentist and Bayesian versions of these models. In terms of experimental analysis, the paper argues: the deep models recover the closest demographic splits to the polling estimates, jointly training models across races recover vote covariance structure similar to the truth, and that representations from deep models are useful for few-shot prediction.\\n\\nI think it's important to connect fields like ecological inference and deep learning. This paper is a solid attempt to combine these disparate fields. The datasets it uses for its experiments are interesting and should be of value to the ML community. \\n\\nThe main problem with the paper is that the experiments are incomplete, unclear, and unconvincing of the significance of the proposed models. For one, the paper doesn't compare to a standard MLM as a baseline -- only the deep MLM, proposed in this paper. How do we know the improvements of a Deep MLM wouldn't be present in the standard method?\\n\\nBecause the main results of this paper are applied, strong analysis is crucial. A high-level issue is that the analysis doesn't go deep enough. There are claims like the following: the deep multi-level model \\\"leads to more reasonable estimates of partisan crossover voting, that is Republican voters voting for Democratic candidates and vice versa, than the dense network, which tends to underestimate the degree of crossover relative to what is found in survey data\\\". Why is this true? Can we interpret the results of the deep MLM that show why this behavior is happening? This is another example where training the regular MLM would be beneficial, so we could compare them and isolate the effects of deep learning. A similar claim is: \\\"Crossover is essentially a matter of capturing the interaction of smaller effects which affect crossover with that of a vary large effect, namely partisanship. We conjecture that the inductive biases of our architecture facilitate the estimation of such interactions.\\\" Can we quantitatively show what's happening instead of conjecturing?\\n\\nMoreover, the model that has the best test set R^2 is the standard linear regression. This is depicted in Table 1 but not discussed in the paper. 
It is not a make-or-break result, but the paper would benefit from explicitly answering the question: how can we reconcile the fact that the baseline has the best heldout performance but the proposed models perform best on the other tasks?\\n\\nThe paper also omits many key experimental details, which would hamper the reproducibility of results. How are train/test splits created? Are they on the precinct-level? What percent of the data/precinct are in each split? How many hidden units -- and which nonlinearities -- are used for the neural networks? What are the latent features used for the few-shot learning experiment? Is it just the beta's? The beta's concatenated with alphas? Or the entire logits? How do the experiments use the binomial loss for third-party candidates? Is it 1/0 Democrats/not Democrats? \\n\\nSome smaller comments about the graphs/tables:\\n * Where is the mean MSE for the dense baseline in Figure 2C?\\n * Why is there no LOO for the 0 hidden layer input for binomial R^2 in Figure 3, i.e. the model input? We only see it for the multinomial. \\n * Why is LOO R^2 for baseline dense missing in Table 1?\\n * I would remove Figure 1A to clear up space for analysis. Currently, the figure only appears to be showing the fit to the training data, which is not a very salient piece of information.\\n\\nOn a more minor note, there were quite a few typos throughout. Some examples:\\n * \\\"represents the candidate the selected\\\" on page 2 [should remove \\\"the\\\"]\\n * \\\"This gives us a useful test case for examining not correlations in voting patterns between races\\\" on page 4 [should remove \\\"not\\\"]\\n * \\\"We demonstrate two practical use cases for out method\\\" on page 4 [\\\"out\\\" should be \\\"our\\\"]\\n * \\\"various depths. performance of the latent space for predicting survey responses.\\\" on page 7 [second sentence is incomplete and not capitalized]\\n * in general, \\\"R\\\" in \\\"Republican\\\" should be capitalized\\n\\nIn summary, this is an interesting idea and an important research area, but for now it is preliminary work due to incomplete experiments/analysis.\", \"pros\": [\"Important research area (combining political science with machine learning)\", \"Valuable datasets introduced to community\", \"Model proposed is intuitive\"], \"cons\": [\"Experiments missing key analysis and baselines\", \"Experiments aren't reproducible\", \"Clarity could be improved, typos throughout.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
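To fix ideas about the architecture described in the review above, here is a minimal PyTorch sketch of a head in which varying slope coefficients are produced by a feed-forward network over an individual's covariates; the hidden size, global intercepts, and softmax link are our illustrative reading of the review, not the paper's exact specification.

```python
import torch
import torch.nn as nn

class DeepVaryingCoefHead(nn.Module):
    # A feed-forward net maps each voter's covariates to varying slope
    # coefficients; vote-choice logits combine those coefficients with the
    # covariates, plus a global per-candidate intercept.
    def __init__(self, n_covariates, n_candidates, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_covariates, hidden), nn.ReLU(),
            nn.Linear(hidden, n_covariates * n_candidates),
        )
        self.alpha = nn.Parameter(torch.zeros(n_candidates))

    def forward(self, x):  # x: (n_voters, n_covariates)
        beta = self.net(x).view(len(x), -1, x.shape[1])  # (n, n_cand, n_cov)
        logits = torch.einsum("nkc,nc->nk", beta, x) + self.alpha
        return torch.softmax(logits, dim=-1)  # per-voter candidate probabilities
```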
"{\"title\": \"This is an interesting and well-motivated paper that uses techniques from deep learning to generete approximations for the ecological inference problem on voting data. This paper also uses voter file data as an underlying data source, which allows for additional novel insights.\", \"review\": \"This paper proposes a deep learning framework for approximating ecological inference for estimating voting propensities based on demographic aggregates. This is an important problem, as EI has become a court standard for evaluating racially polarized voting in gerrymandering cases for the Gingles factors. Additionally, the increased attention on building coalition districts and availability of individual level data means that this is a problem that is likely to have a large impact in the next redistricting cycle that begins next year.\\n\\nThe proposed methodologies seem natural once the approximation is constructed and this analysis explores some potential ways to incorporate it into various learning architectures. Additional work could be devoted to optimizing over the choices of hyperparameters and providing additional guidance about ways to choose which model would be appropriate based on available input data, since not all applications of these methods will have access to the full sets of surveys and validation measures that were available here. It would be nice to see the performance of these methods on some synthetic data as well and at least one comparison to one of the current state of the art methods on an aggregate version of the data would be useful. \\n\\nOverall, this paper is interesting and presents an approximation that is likely to be useful in practice for real world problems and given the space constraints appears to present sufficient work to be publishable.\", \"a_couple_of_typos\": \"Last sentence of paragraph 1 in Section 3 `not correlations' seems like a misnomer\\n\\nEnd of caption 1, missing close paren.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Official Review\", \"review\": \"Summary:\\nThis paper uses deep neural networks into the ecological inference problem and shows its effectiveness by using large-scale datasets of the Maryland 2018 midterm elections. Some experimental results have been shown.\\n\\nPros.\\n1. Ecological inference is an important problem in political science for modeling individual-level voting behavior given only aggregate-level data.\\n2. The authors attempt to apply the recently advanced technique (I.e., DNN) to the ecological inference problem.\\n3. This paper provides a case study that analyzes real-world voting behavior by using various kinds of datasets.\", \"cons\": \"1. The proposed approach is not new.\\n2. Related work is not adequately cited.\\n3. This paper is not well-written, especially the Experiment section is not readable.\", \"reasons_for_score\": \"I think the task that the authors address is interesting and important. However, this paper does not present new technical contributions and related works are not listed adequately. The authors attempt to provide the case study of the Maryland 2018 midterm elections via deep learning approaches in ecological settings. But, the Experiment section is not well-organized, and I did not find insightful results from this manuscript. Accordingly, my opinion is that this paper is not ready for publication.\", \"detailed_comments\": \"1. Important references are missing. So, It is not clear how far the proposed method has pushed the boundary of existing technology. Firstly, as the authors mentioned, the ecological inference is strongly related to the problem \\u201clearning with label proportions (LLP)\\u201d, the authors should add more discussion. For example, in [R1], the ecological inference problem has been formulated as LLP. Also, ecological inference relates to other problems discussed in the machine learning community. The common aim is to learn the individual-level models from only aggregate-level data; for example, distribution regression [R2], multiple-instance learning [R3, R4], and collective graphical models [R5, R6]. It would be a great idea to survey and discuss the relationships with these methods.\\n2. In the approximation procedure for the loss function, the assumption is not clear. Is that \\u201ceach individual adopts one of the candidates based on the shared probabilities that is the average over $N$ individuals\\u201d? If that is the case, the loss is the logarithm of multinomial distribution (i.e., Eq. (4)); this is equivalent to that has been used in the Collective Graphical Models (e.g., [R6]). \\n3. Section 3.2 is not readable. Does each paragraph in Section 3.2 correspond to which Figure? What is Figure 3.2? \\n4. In experiments, the authors should specify the experimental settings. For example, what are few-shot settings? Also, the authors should clarify some hyper-parameters such as optimizer selection and learning rate.\\n\\n\\n [R1] Tao Sun, Dan Sheldon, Brendan O\\u2019Connor, \\u201cA probabilistic approach for learning with label proportions applied to the US presidential election\\u201d, in ICDM, pp. 445-454, 2017.\\n\\n [R2] S. R. Flaxman, Y.-X. Wang, and A. J. Smola, \\u201cWho supported Obama in 2012?: Ecological inference through distribution regression,\\u201d in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2015, pp. 289\\u2013298.\\n\\n [R3] H. Hajimirsadeghi and G. 
Mori, \\u201cMulti-instance classification by max-margin training of cardinality-based Markov networks,\\u201d IEEE Transactions on Pattern Analysis and Machine Intelligence, 2016.\\n\\n [R4] H. C. L. Law, D. Sejdinovic, E. Cameron, T. C. D. Lucas, S. Flaxman, K. Battle, and K. Fukumizu. Variational learning on aggregate outputs with Gaussian processes. In NeurIPS, pages 6084\\u20136094, 2018.\\n\\n [R5] D. R. Sheldon and T. G. Dietterich, \\u201cCollective graphical models,\\u201d in Advances in Neural Information Processing Systems, 2011, pp. 1161\\u20131169.\\n\\n [R6] D. Sheldon, T. Sun, A. Kumar, and T. G. Dietterich, \\u201cApproximate inference in collective graphical models,\\u201d in International Conference on Machine Learning (ICML), vol. 28, no. 3, 2013, pp. 1004\\u20131012.\", \"minor_comments\": [\"Set the mathematical symbols in italic font; for example, unit $i$ and features $X$ in the sentences of Section 2.2.1.\", \"The text in the figures is small and hard to read.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
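To make the loss discussed in point 2 of this review concrete, here is a minimal NumPy sketch of a precinct-level multinomial log-likelihood with voter-averaged probabilities (our reading of Eq. (4) as described above); variable names are ours, and the count-only multinomial coefficient is dropped since it does not depend on the model.

```python
import numpy as np

def precinct_log_likelihood(voter_probs, vote_counts, eps=1e-12):
    """voter_probs: (n_voters, n_candidates) per-voter choice probabilities;
    vote_counts: (n_candidates,) aggregate tallies observed for the precinct."""
    # Shared precinct-level probability: the average over eligible voters.
    p_bar = voter_probs.mean(axis=0)
    # Multinomial log-likelihood, up to the count-only normalizing constant.
    return float(np.sum(vote_counts * np.log(p_bar + eps)))
```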
]
} |
cotg54BSX8 | Grey-box Extraction of Natural Language Models | [
"Santiago Zanella-Beguelin",
"Shruti Tople",
"Andrew Paverd",
"Boris Köpf"
] | Model extraction attacks attempt to replicate a target machine learning model from predictions obtained by querying its inference API. Most existing attacks on Deep Neural Networks achieve this by supervised training of the copy using the victim's predictions. An emerging class of attacks exploit algebraic properties of DNNs to obtain high-fidelity copies using orders of magnitude fewer queries than the prior state-of-the-art. So far, such powerful attacks have been limited to networks with few hidden layers and ReLU activations.
In this paper we present algebraic attacks on large-scale natural language models in a grey-box setting, targeting models with a pre-trained (public) encoder followed by a single (private) classification layer. Our key observation is that a small set of arbitrary embedding vectors is likely to form a basis of the classification layer's input space, which a grey-box adversary can compute. We show how to use this information to solve an equation system that determines the classification layer from the corresponding probability outputs.
We evaluate the effectiveness of our attacks on different sizes of transformer models and downstream tasks. Our key findings are that (i) with frozen base layers, high-fidelity extraction is possible with a number of queries that is as small as twice the input dimension of the last layer. This is true even for queries that are entirely in-distribution, making extraction attacks indistinguishable from legitimate use; (ii) with fine-tuned base layers, the effectiveness of algebraic attacks decreases with the learning rate, showing that fine-tuning is not only beneficial for accuracy but also indispensable for model confidentiality. | [
"language models",
"transformer",
"model extraction",
"security"
] | Reject | https://openreview.net/pdf?id=cotg54BSX8 | https://openreview.net/forum?id=cotg54BSX8 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"Zm2KPWfSiCN",
"s-WHphcpdQM",
"snF8zsfo4dM",
"pREBjbTjZD2",
"R3ib6vYyKDr",
"OFa2e3LvL2P",
"kxTNL48DpAR",
"tELFg0KpEY0",
"DUpl0854on-",
"CZMk-m1WvFw",
"Tl4eK89Ywur",
"Es9JrCKoPTw"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040438023,
1606305557136,
1606266277354,
1606134745348,
1606133721833,
1606133587538,
1606133476162,
1606133006973,
1604169485291,
1603930143146,
1603788835042,
1602885809254
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3600/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3600/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3600/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3600/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"After discussion with the reviewers, it seems that a. without fine-tuning the result is close to being trivial (as noted also by two reviewers) b. with fine-tuning results are lower c. The setup of just a linear classification layer is less common (but exists) d. The cases where extraction succeeds the performance is low such that BERT would not even be used.\\n\\nIn response, the authors offer many interesting directions: a. Propose a new hybrid approach that combines learning-based and extraction-based methods b. Run experiments to try and support the claim that their setup of one linear layer with frozen layers is practical.\\n\\nThese proposed modifications are interesting and show that there is potential in this paper, but it deviates substantially from the original paper and still, caveats remain, so my recommendation is to re-submit after further pursuing the new directions proposed in the response.\"}",
"{\"title\": \"Thanks for following up!\", \"comment\": \"## Hybrid attacks\\n\\n> What was the 10% data used?\\n\\nWe used real NLI data. We combined the `test_matched`, `test_mismatched`, `validation_mismatched` datasets of MNLI and enough points from the `train` dataset (3,293, precisely). We evaluate accuracy and agreement on the `test_matched` dataset, which is disjoint from the ones we used for extraction.\\n\\n> How does it work with the RANDOM / WIKI strategies of [1]?\\n\\nFor an easier comparison to [1], we re-run this experiment using extraction datasets generated with the RANDOM and WIKI strategies (using code from https://github.com/google-research/language/tree/master/language/bert_extraction/).\\n\\nFor example, for BERT-Small fine-tuned on MNLI with learning rate 2e-5, using 32,768 queries generated with the WIKI strategy, distillation achieves 61.63% accuracy and 66.87% validation agreement. Running our algebraic attack on top of it achieves 61.57% accuracy and 67.11% validation agreement. \\n\\n## Frozen encoder layers\\n\\nFollowing the method described in [3], we also ran preliminary experiments in which only some of the layers of the BERT encoder are frozen and we repeated our algebraic extraction of the classification layer. Specifically, for BERT-Base (12 layers) we froze the embedding layer and the first 9 layers (as this was shown in [3] to give good performance) and trained the remaining layers and the classifier with learning rate 2e-5. \\nFor SST-2, this resulted in 89.45% target model accuracy. Algebraic extraction using 1,536 queries achieves 81.77% accuracy, 85.43% agreement on the validation dataset, and 88.87% agreement on random queries. Compared with Figure 2 in our paper, a similar target model accuracy is achieved with a learning rate of 1e-6, but our extracted model accuracy improves by 1%.\\nFor MNLI, this resulted in 79.71% target accuracy, 41.10% extracted model accuracy, 42.54% agreement on the validation dataset and 54.48% agreement on random queries. This corresponds to a point between 1e-6 and 1e-5 learning rate in our Figure 2.\\nWe will perform further experiments freezing different numbers of base layers to explore this further.\\n\\n> I encourage the authors to continue working in this direction.\\n\\nFor the next revision, we will run the following experiments and integrate the results into the paper body:\\n- We will run experiments with GPT-2 in addition to BERT, and multi-label classification tasks beyond MNLI: e.g., Yelp reviews star rating (5 classes), 20Newsgroups (20 classes).\\n- We already ran preliminary experiments with selective freezing of layers following [3]. We will run more thorough experiments freezing layers gradually and fine-tuning the rest using different learning rates.\\n- We will run thorough experiments using hybrid attacks for different base models and tasks, varying the learning rate, number of frozen layers, number of queries, and type of queries (i.e., NLI, Random, Wiki).\\n\\n> More work is needed before this paper is ready for publication.\\n\\nWe welcome suggestions for directions to explore beyond those we describe above.\\n\\n[3] https://arxiv.org/abs/1911.03090\"}",
"{\"title\": \"Thanks for your replies!\", \"comment\": \"Thank you so much for the detailed rebuttal, I really appreciate it. Here are my thoughts,\\n\\n> hybrid attacks\\n\\nThis looks like a promising direction! But how did you get 79% accuracy with just 10% data on BERT-base? Based on Table 3 in [1], this is only possible if you use real NLI data (the numbers in Table 3 of [1] use BERT-large and get about 81%, but are much lower for WIKI and RANDOM at 0.1x). What was the 10% data used? How does it work with the RANDOM / WIKI strategies of [1]?\\n\\nI completely agree that algebraic attacks are promising with a small query budget, and many experiments in [1] worked well only with liberal query budgets (which may not be practical for an attacker). I encourage the authors to continue working in this direction.\\n\\n> frozen encoder layers\\n\\nWhile I agree freezing BERT may not be a bad idea for computational reasons, my concern is having just a one-layer network on top of it with a softmax nonlinearity. The success of word embeddings and ELMo (pre-BERT) all came by much deeper networks on top of frozen pretrained representations. As your paper showed, just training one layer is not a good idea to optimize performance (might as well use a non-BERT model like the CNN from [2]).\\n\\n**Overall**: I really appreciate the rebuttal and the efforts put into conducting the hybrid attacks. I think they are a promising direction. However, I think more work is needed before this paper is ready for publication. The current set of experiments will not be sufficient for me to increase my score to acceptance.\\n\\n[1] - https://arxiv.org/pdf/1910.12366.pdf \\n[2] - https://arxiv.org/pdf/1408.5882.pdf\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"## Hybrid attacks\\n\\nWe hinted in the Discussion section in the submission at ways of combining both classes of attacks. Motivated by the Reviewer's questions we investigated this further and can report some positive results:\\n \\nWe ran a hybrid attack that first uses the (black-box) distillation approach from [1] to get a better approximation of the target model. We use the same hyperparameters as [1]: train for 3 epochs, batch size 32, learning rate 3e-5, Adam optimizer with decoupled weight decay (AdamW in PyTorch) and cross-entropy loss on soft labels. We use the resulting model instead of the pretrained base model in our algebraic attack to extract the classification layer, reusing the same queries as for distillation.\\n \\nWhen using a dataset ~10% the size of the training dataset of the target model, and targeting a model fine-tuned with a high learning rate (2e-5), the learning-based attack already performs well, but running our algebraic attack on top of it consistently results in a gain of around 1% in both accuracy and agreement. Given that no extra queries are required and the cost of the algebraic attack is negligible, this is a surprising gain. \\n\\nFor example, for BERT-Base fine-tuned on MNLI with learning rate 2e-5, using 32,768 queries distillation results in 79.03% accuracy and 86.98% agreement. The algebraic attack improves these results to 80.20% accuracy and 88.34% agreement. We included more results in Appendix A in the rebuttal revision. We will perform a more thorough evaluation in a later revision.\\n\\nWe think that algebraic and learning-based attacks can be complementary. Our experiments show that algebraic attacks outperform learning-based attacks when the target model's parameters are trained with a low learning rate, or when only a few queries are available. Our experiments also explore the limits of algebraic attacks and show that in some other scenarios learning-based attacks clearly perform better.\\n\\n \\n## Frozen encoder layers\\n\\nWe concur with the reviewer that it is in general undesirable to freeze all layers of a pretrained model during fine-tuning. This setting is particularly unfavourable for modestly sized models like BERT-Small and BERT-Base in the tasks that we consider, where simpler models do better. The goal of our experiments in this setting is indeed to confirm empirically that the theory behind the attack holds up when using real world models and data.\\n \\nWe note, however, that freezing layers of a pretrained model or fine-tuning them with a learning rate much lower than the one used for task-specific layers is not that uncommon in practice. For instance, the documentation of the HuggingFace Transformers library describes how to freeze all layers of a pretrained BERT model during fine-tuning (https://huggingface.co/transformers/training.html#freezing-the-encoder). The sample code provided freezes the BERT encoder layers **and** the `BertPooler` layer. Searching in GitHub and StackOverflow shows that this is a recurrent request from practitioners (e.g., https://github.com/huggingface/transformers/issues/400, https://github.com/google-research/bert/issues/637). 
As another example, the Keras developer guide on transfer learning and fine-tuning recommends initially freezing all layers of a pretrained model (Keras provides facilities for doing this) and only later optionally fine-tuning them using a very low learning rate (https://keras.io/guides/transfer_learning/).\\n \\nThe folklore that earlier layers of a model extract universal features while later layers perform task-specific modelling, together with computational resource considerations, also makes it attractive to freeze earlier layers. This might be because training all layers is infeasible, or because one wants to share parameters between several downstream tasks. Although Adapters achieve a higher degree of parameter sharing, freezing early layers and only fine-tuning top layers is a simpler solution to share parameters between downstream tasks (https://arxiv.org/abs/1902.00751). Another recent evaluation shows that Transformer models do comparably well when only some final layers are fine-tuned (https://arxiv.org/abs/1911.03090).\\n \\nWe observed empirically that either freezing early layers of a model or training them with a lower learning rate improves the performance of algebraic attacks. \\n \\n## Effectiveness\\n\\nThe reviewer is right that the 10% drop compared to learning-based attacks only holds for the SST-2 task. We corrected our claim in the paper.\\n\\nOur early results on hybrid attacks suggest they are effective in the practical setting of fine-tuning BERT with typical hyperparameters.\\n\\n## Minor issues \\n \\nGood observation. We added a footnote in Proposition 2 to explain that although a model with vocabulary $V$ and maximum sequence length $L$ can only produce $|V|^L$ different embeddings, in practice this is no more problematic than using finite-precision arithmetic (where orders of magnitude fewer numbers are representable).\"}",
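A minimal sketch of the freezing pattern discussed above, following the HuggingFace example linked in the response (the model name and label count are placeholders):

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3)

# Freeze the pretrained encoder so only the classification head is trained --
# the setting in which the algebraic attack applies exactly. To fine-tune only
# the top blocks instead, restrict the loop, e.g. to model.bert.embeddings and
# model.bert.encoder.layer[:9].
for param in model.bert.parameters():
    param.requires_grad = False
```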
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"## Theoretical results\\n\\nThe problem of extracting models where only task-specific layers are fine-tuned (scenario A) is closely related, but not equivalent, to parameter estimation for regression (scenario B). The key difference is that for (A) the goal is to recover a fixed but unknown set of parameters (i.e. a ground truth) with a minimal amount of data, whereas for (B) one wants to find the best parameters to fit all the available data. In particular, subsampling is not a direct fit for (A) because \\n-\\twe cannot directly choose inputs to the classifier, only to the encoder whose outputs are the inputs of the classifier.\\n-\\tany linearly independent set of embeddings will allow to achieve (A).\\nWe added a discussion on the relationship between (A) and (B) to the related work section.\\n\\nSection 2 is simply intended to explain the methodology of our attack. We use propositions to structure the presentation, but we do not claim that any of the propositions in isolation are novel (or indeed non-obvious to expert practitioners). For instance, we explicitly mention that Proposition 1 is standard. Our contribution is to point out that the combination of these propositions entails a scenario where an algebraic attack is likely to succeed in practice. This attack scenario is novel and has not yet been explored. In Section 3, we empirically show that the attack works not only in the *clean-room* case when $\\\\eta=0$, as one can reasonably hypothesize, but also beyond, which is more surprising.\\n\\n## Practical results\\n\\n1.\\tWe include the $\\\\eta=0$ case as a *clean-room* illustration of the theory described in Section 2. We also believe it has practical relevance. See our response to Reviewer 2 for a more detailed description of cases where freezing some layers of a pretrained model is beneficial.\\n\\n2.\\tThe question of how to measure the distance between two models is very interesting (even in the white-box case for one-layer logistic regression models). We compared our parameter estimates for the extracted layer against the target parameters using L\\u221e distance. This simplistic metric is a good predictor of how an attack performs when varying hyperparameters for a fixed target model, but it is not meaningful when comparing results across different target models where the magnitudes of parameters can vary widely (as shown in Table 2, even with relatively large L\\u221e distances, an attack can still do well). Agreement (on in-distribution and out-of-distribution inputs) is a much more meaningful metric that directly aligns with the *high-fidelity* goal of algebraic extraction. It would be interesting to explore this question further. For instance, measuring cross-entropy of soft labels rather than agreement on hard labels could be informative.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"## Positive use\\n\\nOne of the goals of our experiments was to understand how algebraic attacks perform using different types of queries, i.e., in-distribution versus random. We observe that in-distribution queries are more effective and hence it is harder to detect and defend against the attack by observing only the query patterns in the input space. As a next step towards learning a model's potential weakness, we plan to:\\n\\n1.\\tAnalyse the embedding space (instead of the input space) for random and in-distribution queries as suggested by Reviewer 4.\\n2.\\tExplore if we can identify inputs whose embeddings are less affected by fine-tuning and can be used for more effective extraction. \\n\\nThere is an intriguing connection to solutions to *catastrophic forgetting* (aka catastrophic inference), where the goal is to preserve knowledge about previously learned tasks. A pre-trained encoder model that is more robust to catastrophic forgetting will preserve the embeddings of out-of-distribution queries better when fine-tuned, increasing its susceptibility to algebraic extraction attacks.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"## Positive points + questions\\n\\n> Is there a difference in extraction results when using in-distribution queries vs. random?\\n\\nWe added full experimental results for extraction with in-distribution and random queries in Appendices B and C. Models extracted using in-distribution queries show better agreement with the target model on in-distribution inputs than on random inputs, with the gap closing as the fine-tuning learning rate increases and agreement on both types of inputs decreases. Models extracted using random queries show better agreement with the target model on random inputs than on in-distribution inputs but the gap does not close as the fine-tuning learning rate increases, with agreement on random inputs remaining high. See, e.g., Figures 5 and 6 in Appendix C in the rebuttal revision.\\n\\n## Negative points + questions\\n\\n> For the fine-tuning/learning rate experiments it would be good to evaluate this on more than just 2 tasks (e.g., maybe a range of different tasks in GLUE) not only to see if the trend still holds, but also to see if task \\u201ctype\\u201d or characteristics of the task/fine-tuning affect the extraction fidelity\\n\\nWe plan to add results for other language classification tasks (with a larger number of classes) as well as for GPT-2-Small in a later revision.\\n\\n> The extracted model accuracy of BERT-base with MNLI seems to be quite static (almost no effect on increasing or decreasing learning rate)---and it would be really helpful to see how statistically significant those results are and what they look like over different seeds.\\n\\nWe performed a more thorough evaluation (including additional experiments) to answer this question. For each combination of hyperparameters (#queries, learning rate, random vs in-distribution queries, model type), we repeated the attack with 5 different random seeds. The variability is visible in the bands around the curves depicting average model accuracy and agreement in the full results in Appendix B and C and in Figures 1 and 2 in the body of the rebuttal revision. \\nThe accuracy of models extracted from BERT-Base fine-tuned on MNLI indeed does not vary much with the learning rate, contrary to agreement on random and in-distribution queries. This reflects the *high-fidelity* goal of algebraic extraction attacks, which prioritize agreement over accuracy.\\n\\n> Is there a comparison between the algebraic approach and a learning-based approach for the same tasks? (I think the paper is novel and useful enough in itself, but it would be helpful to see a side-by-side comparison).\\n\\nWe ran additional experiments comparing the algebraic and learning-based approaches, as well as a hybrid approach that combines both. We report the results in Table 3 of Appendix A.\\n\\n> Is there a comparison between extracting only a single layer or going beyond to having multiple layers of target/finetuned classifiers? Is this approach feasible and similarly beneficial as a grey-box attack in that scenario? It would be really helpful to have a discussion on what that would require for future work.\", \"our_grey_box_attack_could_be_extended_to_extract_multiple_task_specific_layers_by_generalizing_the_attack_from_https\": \"//arxiv.org/abs/2003.04884. This is in principle possible when the added layers form a piecewise-linear network (e.g., a MLP with ReLU activations). 
The generalization seems feasible when the encoder is frozen and the inputs to the piecewise-linear network are known. It would be very challenging to further generalize the method when the encoder layers are fine-tuned. Specifically, the method relies on testing hypotheses by observing the failure or success of extracting weights. This would be unreliable when the inputs to the piecewise-linear component are not known with certainty.\"}",
"{\"title\": \"Summary\", \"comment\": \"We sincerely thank all the reviewers for their thoughtful feedback!\\n\\nWe summarize the changes we have made in the updated PDF after the rebuttal and briefly describe the additional experiments here. We address technical questions in comments to individual reviews.\\n\\n-\\tWe performed 5 runs of our experiments for random and in-distribution queries using different random seeds, for all hyperparameter combinations. The full experimental results (including variability in runs) are given in Appendix B for varying learning rate and Appendix C for varying number of queries.\\n\\n-\\tWe ran additional experiments that compare algebraic and learning-based extraction attacks, as well as a hybrid approach combining both attacks (following reviewers\\u2019 suggestions). We report the results in our comments to individual reviews and present them in Table 3 in Appendix A. We will include a more thorough evaluation in a later revision.\"}",
"{\"title\": \"Important theoretical + empirical results for model extraction attacks, which is helpful and insightful for general NLP interpretability/probing work as well.\", \"review\": \"Summary:\\n\\nThis paper proposes a range of algebraic model extraction attacks (different from the prevalent learning-based approaches) for transformer models trained for NLP tasks in a grey-box setting i.e., an existing, public, usually pretrained encoder, with a private classification layer. Through attacks on different sizes of models and a range of downstream tasks, they observe that only a portion of the embedding space forms a basis of the tuned classification layer\\u2019s input space, and using a grey-box method, this can be algebraically computed. The pretraining-finetuning experiments on different tasks also show the smallest number of dimensions needed for high-fidelity extraction, and also that the model extraction attacks effectiveness decreases with fine-tuning the larger models base layers---which is an insight that is very useful for a lot of interpretability/probing work.\", \"reason_for_score\": \"I think this paper is very well-formulated---both theoretically and empirically with promising results that will be useful not just for grey-box adversarial attacks, but also for works interesting in the effects of pretraining-finetuning (which at this point encompasses nearly all NLP tasks). The empirical results look promising---however I would like to see this demonstrated on more than just 2 datasets (and maybe even a GPT-like model, instead of just BERT) to see if (1) the results hold empirically and (2) if there any insights to be gleaned about adversarial attacks from different task structures and model types.\\n\\n\\nPositive points + questions:\\n\\n1. The transformation of the raw logits for recovering information is really interesting. In the experiments for the random set of n embeddings chosen to form a basis of the last layer\\u2019s input space---are there any insights on what those embeddings amount to semantically; and also what a ground truth selection of embeddings (e.g., that an oracle adversary would compute) should be? It would be helpful to have a discussion and examples of those.\\n\\n2. Is there a difference in extraction results when using in-distribution queries vs. random? Most of the results say \\u201cextraction is possible with both\\u201d which is good to see, but a more finer-grained analysis/explanation of benefits/pitfalls of each would really help clarity.\\n\\n3. It\\u2019s nice that both a single-sentence and pairwise-sentence (SST-2 vs. MNLI) task are used to evaluate effects for the fine-tuning experiments in big transformer models.\\n\\n4. The results look very promising and these insights are extremely helpful even for general probing/interpretability works (especially the learning rate finetuning effects) and also hold up to existing BERT-finetuned results.\\n\\n5. Unlike previous work, this algebraic model extraction words even with non-linear activation layers---and this is helpful given the current standard of fine-tuning large transformer models e.g., with simple MLP/softmax classifiers. \\n\\n6. Slightly different from previous work, not only can this work when attacks require embeddings to be chosen, but also when selecting (e.g., random/or from a distribution) needs to be done as well. \\n\\n\\nNegative points + questions:\\n\\n1. 
For the fine-tuning/learning rate experiments it would be good to evaluate this on more than just 2 tasks (e.g., maybe a range of different tasks in GLUE) not only to see if the trend still holds, but also to see if task \\u201ctype\\u201d or characteristics of the task/fine-tuning affect the extraction fidelity.\\n\\n2. The extracted model accuracy of BERT-base with MNLI seems to be quite static (almost no effect on increasing or decreasing learning rate)---and it would be really helpful to see how statistically significant those results are and what they look like over different seeds.\\n\\n3. Is there a comparison between the algebraic approach and a learning-based approach for the same tasks? (I think the paper is novel and useful enough in itself, but it would be helpful to see a side-by-side comparison).\\n\\n4. Is there a comparison between extracting only a single layer or going beyond to having multiple layers of target/finetuned classifiers? Is this approach feasible and similarly beneficial as a grey-box attack in that scenario? It would be really helpful to have a discussion on what that would require for future work.\", \"additional_minor_comments\": \"This is really well written and placed in the literature; no minor nitpicks re: writing!\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Practical method for extracting semi-private language models with some demonstrated success\", \"review\": \"The paper proposes an algebraic attack for extracting the parameters\\nof a semi-private language model that consists of a pre-trained\\nencoder and a privately trained classification layer.\\n\\nThe method is to first sample from the input space, compute their\\nembeddings using the known encoder, and then use the embeddings and\\nthe queried classifier softmax output to solve for the classifier weights.\\nIt overcomes the obstacles encountered by former such attempts due to \\nthe requirements of known embeddings and raw logits.\\n\\nThe paper provides support for the method in arguing that a random\\nbasis (like an arbitrary set of embeddings obtained from encoding a\\nset of arbitrary, distinct input) is sufficient to serve as a basis\\nthat spans the classifier layer's input space, and that using the\\nsoftmax output instead of raw logits can lead to equivalent solutions\\nup to a translation invariance.\\n\\nExperiments on two public datasets and two versions of the BERT model\\nshow the effectiveness of the method, and demonstrate that\\nthe number of queries needed is relatively small,\\nthe probes can be drawn from the distribution of legitimate input,\\nand that fine tuning the encoder makes the attack less effective as the true\\nembeddings deviate from those computed from the publicly known\\nencoder.\\n\\nThe paper is well written and the method is sound and practical.\\nSuggestions on defenses against such attacks are of good reference\\nvalue.\\n\\nOne question is whether the proposed approach could be put to some\\npositive use, such as learning about a model's potential weakness in\\nthe input space?\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper considers a parameter estimation for the logistic regression and - no surprise - succeeds in it\", \"review\": \"##########################################################################\", \"summary\": \"The paper considers the reconstruction of the last layer for NLP data processing models. This problem is equivalent to the parameter estimation for logistic regression in the first of the paper and quite close to it in the second part when we purposely change the encoder via transfer learning.\\n\\nNo surprise, that the reconstruction in this setting works well. This is what we already know from linear algebra and Gauss-Markov [1], Bernstein-von-Mises like theorems in statistics [2, chapter 10].\\nMore interesting is the part about what is happening, when we deal with reconstruction under a transfer learning setting. In this case, we observe a predictable degradation of the quality of the models, but nothing more specific\\n\\n##########################################################################\", \"reasons_for_score\": \"I vote for rejection, as this paper doesn't contribute to our understanding of what is happening in real-world NLP models with many layers, rather focusing on the last layer fine-tuning.\\n\\n##########################################################################\", \"more_detailed_review\": \"################\\nTheoretical results\", \"all_proposition_in_the_paper_are_obvious_and_also_equivalent_to_the_recovery_procedure_for_the_coefficients_of_a_multiclass_logistic_regression\": \"1. Proposition 1 is obvious\\n2. Proposition 2 is obvious\\n3. Proposition 3 is obvious\\n4. Proposition 3 is obvious\\n\\nThe general statement that concludes this section and leads to further experiments should be compared to theoretical results for softmax (or multinominal) regression, see e.g. [3] for some details on the quality of the estimates in this setting. Also, see similar results for logistic regression in [4]. Both these papers present result on the quality of parameters' estimates in a more advanced subsampling setting, and even in this case, they provide the speed of converges for the error of parameter estimates. \\n\\nSo for the benefit of the quality of the paper, I suggest dropping all theoretical results as they are not new.\\n\\n################\\nPractical results\\n\\n1. Due to the reasons similar to that mentioned above the experiments for $\\\\eta = 0$ can be dropped to avoid confusion from the reader\\n2. For the setting with the fine-tuning of the models, we can see from experiments that after learning emerges a disagreement between the parameters estimates via the proposed procedure and the initial values of parameters. In particular, how can we measure the distance between two models even if they are one-layer logistic regression models, and can we do something if there is one layer in a setting closer to the white box problem. \\n\\n[1] Henderson, C. R. (1975). Best linear unbiased estimation and prediction under a selection model. Biometrics, 423-447.\\n[2] Van der Vaart, A. W. (2000). Asymptotic statistics (Vol. 3). Cambridge university press.\\n[3] Yao, Y., & Wang, H. (2019). Optimal subsampling for softmax regression. Statistical Papers, 60(2), 235-249.\\n[4] Wang H, Zhu R, Ma P (2018b) Optimal subsampling for large sample logistic regression. 
J Am Stat Assoc 113(522):829\\u2013844\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting research questions and thorough analysis, but attacks are weak in practical settings\", \"review\": \"Summary: This paper is an interesting study of algebraic model extraction attacks on modern NLP models based on BERT. Model extraction is the setting where a malicious attacker tries to reconstruct a copy of a black-box inference API without access to the original training data. Prior work [1] showed these attacks are possible on BERT models using a distillation-like learning method, using gibberish sequences of words as queries to the API. However, these attacks needed large number of queries for success. This work adopts a different strategy --- equation solving the parameters of the neural network using least square linear algebra methods. This not only allows extraction with lesser queries, but also ensures greater similarity between the API and extracted model (\\\"high fidelity\\\", [2]). The attacks in this paper work perfectly in settings where BERT is frozen and a single classification layer is fine-tuned. However, the attacks are not as effective in the more practical setting where BERT is fine-tuned, and the authors perform a thorough analysis varying critical hyperparameters.\\n\\n-----------------------------------\", \"strengths_of_the_paper\": \"1. This is a new attack setup (especially in the BERT fine-tuning setup), algebraic attacks have only been attempted on very shallow neural networks with ReLU activations. Algebraic attacks have several advantages for the attacker like high fidelity and small query budgets. In the frozen BERT, single layer setting this works perfectly with a very small query budget (however, see my Weakness #1).\\n\\n2. The paper is well written and easy to understand, and authors do a very good analysis of their attacks varying important hyperparameters like learning rate, number of queries, type of queries.\\n\\n-----------------------------------\", \"weaknesses_of_the_paper\": \"1. I don't think the setting where the attack works perfectly (frozen BERT with a single classification layer) is practical. Theoretically it's fairly obvious this should work, and I think the main contribution here is a empirical confirmation that it works with real data. There are a number of reasons why this is not practical --- (1) there are actually 2 layers between the sequence_output and the final logits, with a tanh activation (see https://github.com/google-research/bert/blob/master/modeling.py#L219-L232), or look for `BertPooler` in the HuggingFace code. These two layers are needed to separate the MLM representation from the logits. Even in the frozen setting, I anticipate fine-tuning atleast these two classification layers; (2) target accuracies are quite poor without fine-tuning. 75% on SST2 (Target Acc from Table 1) is quite poor, even a 1-layer CNN does much better and gets 83-88% accuracy (https://arxiv.org/abs/1408.5882). Similarly, the Target Acc. for MNLI is close to 33%. Without fine-tuning and just a single classifier layer, I don't expect people to use BERT; (3) Finally access to probability distributions / logits might be a strong assumption in structured prediction NLP tasks like question answering or NER.\\n\\n2. In the more practical setting of finetuning the model, the attacks are not effective. While I like the overall idea of leveraging the BERT pretrained checkpoint to do algebraic attacks, the authors' results show that this by itself is not sufficient to make an effective attack. 
The authors' statement \\\"For the fine-tuned models, agreement drops to 10% below the learning-based state-of-the-art\\\" is not entirely correct. It is only true on the simpler SST-2 task, where even 1-layer CNNs perform exceedingly well. In the harder MNLI task, agreement is far lower than state-of-the-art [1], with a gap of 44% vs 82.2%. Performance of the extracted models on MNLI is quite low, about 40-45% in Figure 1, which is quite close to random guessing (BERT-base gets 84-85% accuracy).\\n\\n-----------------------------------\", \"overall_recommendation\": \"The authors did a good job with presentation and studied an interesting algebraic attack. However, the attack only works in the impractical setting of a frozen BERT, and is ineffective in the more practical setting of finetuning BERT. Intuitively, it's fairly obvious this attack should work in the frozen BERT, single layer setting. This result by itself is not sufficient for acceptance to ICLR. While I'm leaning reject, I encourage the authors to explore the BERT fine-tuning setting more. For instance, can a hybrid attack be constructed which uses the best of both worlds? Since queries do not seem to cost much [1], can the attacks be stronger in this hybrid setting with more liberal query budgets?\\n\\n-----------------------------------\", \"minor_issues\": \"In Proposition 1, uniformly sampling from an n-d cube is not entirely correct. BERT has a fixed discrete input space, since you only feed text as input to BERT. You are going to have a maximum of V^L unique points in the support of the [CLS] vector space (where V is the vocab and L is the maximum sequence length). Since V ~ 30k and L = 512, I guess it's not a problem practically.\\n\\n392.702 ---> 392,702\\n\\n-----------------------------------\", \"references\": \"[1] - https://arxiv.org/abs/1910.12366 \\n[2] - https://arxiv.org/abs/1909.01838\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
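The hybrid attack this review suggests can be sketched as a toy in PyTorch: seed the extracted head with the algebraic least-squares solution, then refine it by distillation on the API's probabilities. Everything here (the simulated embeddings, the victim head, the optimizer settings) is a hypothetical stand-in, not the paper's setup; in this exactly-solvable toy the algebraic seed is already perfect, and the learning-based refinement would mainly matter when the encoder has been fine-tuned and the embeddings are only approximate.

```python
import torch

torch.manual_seed(0)
H = torch.randn(256, 32)                 # query embeddings (known, grey-box)
W_v = torch.randn(32, 4)                 # hidden victim head
P = torch.softmax(H @ W_v, dim=1)        # API output: probabilities only

logp = torch.log(P)
logits = logp - logp.mean(dim=1, keepdim=True)   # remove per-query constant
A = torch.cat([H, torch.ones(256, 1)], dim=1)
sol = torch.linalg.lstsq(A, logits).solution     # algebraic initialization
W = sol[:-1].clone().requires_grad_(True)
b = sol[-1].clone().requires_grad_(True)

opt = torch.optim.Adam([W, b], lr=1e-2)
for _ in range(200):                              # learning-based refinement
    opt.zero_grad()
    loss = torch.nn.functional.kl_div(
        torch.log_softmax(H @ W + b, dim=1), P, reduction="batchmean")
    loss.backward()
    opt.step()

agree = ((H @ W + b).argmax(1) == P.argmax(1)).float().mean()
print(agree.item())                               # fidelity on the queries
```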
]
} |
hE3JWimujG | Cortico-cerebellar networks as decoupled neural interfaces | [
"Joseph Pemberton",
"Ellen Boven",
"Richard Apps",
"Rui Ponte Costa"
] | The brain solves the credit assignment problem remarkably well. For credit to be correctly assigned across multiple cortical areas a given area should, in principle, wait for others to finish their computation. How the brain deals with this locking problem has remained unclear. Deep learning methods suffer from similar locking constraints both on the forward and backward phase. Recently, decoupled neural interfaces (DNI) were introduced as a solution to the forward and backward locking problems.
Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve the locking problem, closely matching the computations and architecture of DNI. In particular, we propose that classical cerebellar forward and inverse models are equivalent to solving the backward and forward locking problems, respectively. To demonstrate the potential of this framework we focus on modelling a given brain area as a recurrent neural network in which the cerebellum approximates temporal feedback signals as provided by BPTT. We tested the cortico-cerebellar-DNI (CC-DNI) model in a range of sensorimotor and cognitive tasks that have been shown to be cerebellar-dependent. First, we show that the CC-DNI unlocking mechanisms can facilitate learning in a simple target reaching task. Next, by building on the sequential MNIST task we demonstrate that these results generalise to more complex sensorimotor tasks. Our cortico-cerebellar model readily applies to a wider range of modalities; to demonstrate this, we tested the model in a cognitive task, caption generation. Models without the cerebellar-DNI component exhibit deficits similar to those observed in cerebellar patients in both motor and cognitive tasks. Moreover, we used CC-DNI to generate a set of specific neuroscience predictions. Finally, we introduce a CC-DNI model with highly sparse connectivity as observed in the cerebellum, which substantially reduces the number of parameters while improving learning through decorrelation. 
Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue of research between deep learning and neuroscience. | [
"systems neuroscience",
"cerebellum",
"neocortex",
"decoupled neural interfaces",
"deep learning",
"decorrelation",
"inverse models",
"forward models"
] | Reject | https://openreview.net/pdf?id=hE3JWimujG | https://openreview.net/forum?id=hE3JWimujG | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"-_B_1wGl0Q",
"ytvZcRDf2O",
"WV-XA_uDIx6",
"qjaFwJO59dP",
"tDwoQaEfU-c",
"n0MTBxs_G7",
"Cm6240NYdnU",
"vZJ6DlckvCd"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040386123,
1606261995160,
1606261832611,
1606261675849,
1606261568657,
1604034423497,
1603723351501,
1603251296424
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3598/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3598/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3598/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3598/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3598/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3598/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3598/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"Reviewers split on this paper with one arguing that it is an intriguing and significant paper for both neuroscience and deep learning, whereas others argued that it fails to answer some key questions and stops short of offering testable predictions or novel findings. In particular Reviewer 2 questioned the limited experimental predictions and their experimental backing, as well as the plausibility of gradient calculations. Reviewer 4 raised more fundamental concerns about the significance of the paper's contributions. All reviewers appreciated the paper's clarity. Overall, though, Reviewers 2 and 4 raised enough significant concerns that I cannot recommend acceptance.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the positive, constructive and detailed feedback. We believe we have now address the points raised. In particular, we have added three new figures, a new section (5) and extended the discussion to highlight predictions made by the model.\\n\\n1. Predictions and comparison with experimental findings: \\n\\n(i) New Fig. 5: Shows that the experimentally observed cerebellar expansion and sparsity are good parameters for pattern recognition tasks, which predicts that the cerebellum might have evolved to help with these types of tasks. This point was previously in the SM but seems to have been missed, so we decided to bring it to the main text.\\n\\n(ii) New Fig. 6: Inspired by classical neuroscience ablation studies we performed ablation experiments to predict when are the cerebellar estimations most important. Our results show that as learning progresses the cerebellum becomes less important even impairing learning once the main network can easily learn the task on its own. \\n\\n(iii) New Fig. 7: We performed a correlation analysis to highlight changes in correlation structure between the cerebellum and the main network. We calculated correlations between the main network and the cerebellar module using a simple feedforward CC-DNI to make a cleaner illustration of the point (but similar results should hold for RNNs). Our results show a drop in correlations of granule cells more initially correlated with the main network, and an increase for granule cells that end up with a high correlation, consistent with recent experimental findings (see Fig. 6B in Wagner et al. 2019 Cell). In addition, we predict that such changes in correlations should be more evident when comparing the main network with the output cerebellar nuclei. \\n\\n(iv) Other predictions: We have added several other predictions to the discussion. \\\"Moreover, the model also predicts which connections should project to GCs (source brain area activity) and the inferior olive (target brain area activity). In addition, as demonstrated by Czarnecki et al. (2017) the CC-DNI model also predicts that without a cerebellum the neocortical representations should become less distributed across different brain areas. Furthermore, as shown by Jaderberg et al. (2017) these models enable a network to be trained in a fully decoupled fashion, which means that it can update asynchronously. Given that the brain is asynchronous this may be a fundamental benefit of having a cerebellar system.\\u201d \\n\\n2. Gradients prediction: It is true that when approximating gradients as in our experiments, under the assumption that the task is learnt near perfectly (close to zero error), the model output will eventually become near silent. However, our framework is general and can be applied to predict any type of activity. In particular, in a biologically plausible model of backprop these is no longer going be the case as the model would simply predict (feedback) activity (e.g. Sacramento et al. NeurIPS 2018). To highlight this, we have used a simple feedforward DNI that predicts activity (Fig. 7 top) to show that when predicting activity the cerebellum will converge to some non-zero value which reflects the activity of the main network. \\n\\n3. Purkinje/cortical numbers: It is true that there are more cortical neurons than Purkinje Cells. 
But it is also true that not all cerebellar predictions might be useful at a given point in time, so we postulate that the thalamus, which is the gateway between the cerebellar output and the neocortex, decides which signals should be sent through, potentially fixing some of the issues that we observe in the ablation study (i.e., that the cerebellar signals can sometimes impair learning). The reverse would happen via the pons when deciding which signals to send to the cerebellum. This is a direction of research that we are very much interested in exploring in the future.\", \"minor_points\": \"We apologize for not being accurate when citing relevant papers. We agree and have fixed all the points raised. Regarding the review by Diedrichsen et al., they do contrast the universal and the modular view, but finish by saying \\u201cLooking across domains, we may ultimately discover a universal cerebellar transform\\u201d, so we think this is a recent balanced review that is appropriate. However, we have also added more classical citations to support the idea of a universal cerebellar function.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"We thank the reviewer for the positive feedback and points raised. We have revised the substantially manuscript to address the points raised.\\n\\n1. Lack of new insights: Our model architecture is mapped onto cerebro-cerebellar circuits to show similar deficits in motor and non-motor tasks to what has been observed in cerebellar-patients. To the best of our knowledge this is the first time that this has been demonstrated. This in turn means that it opens a new sub-field between cerebellar neuroscience and deep learning methods. However, we do agree that more insights should be included. We have added three new figures and one new section (5) with numerous predictions that the model makes regarding cerebellar function: \\n\\n(i) New Fig. 5: Shows that the experimentally observed cerebellar expansion and sparsity are good parameters for pattern recognition tasks, which predicts that the cerebellum might have evolved to help with these types of tasks. This point was previously in the SM but seems to have been missed, so we decided to bring it to the main text.\\n\\n(ii) New Fig. 6: We performed ablation experiments to predict when the cerebellar estimations are most important. Our results show that as learning progresses the cerebellum becomes less important even impairing learning once the main network with BPTT can solve the task on its own. \\n\\n(iii) New Fig. 7: We calculated correlations between the main network and the cerebellar module using a simple feedforward CC-DNI. Our key result is that only a subset of units become more correlated over learning, consistent with recent experimental findings (Wagner et al. 2019 Cell). \\n\\n(iv) We have also extended the discussion with more predictions: \\\"Moreover, the model also predicts which connections should project to GCs (source brain area activity) and the inferior olive (target brain area activity). In addition, as demonstrated by Czarnecki et al. (2017) the CC-DNI model also predicts that without a cerebellum the neocortical representations should become less distributed across different brain areas. Furthermore, as shown by Jaderberg et al. (2017) these models enable a network to be trained in a fully decoupled fashion, which means that it can update asynchronously. Given that the brain is asynchronous this may be a fundamental benefit of having a cerebellar system.\\u201d \\n\\n2. Cerebellum-inspired insights for DL: There are many possible avenues to extend DNI models inspired by the cerebellum. We now highlight the functionally distinct modular structure of the cerebellum and the link to bootstrapped learning as used in DNIs. One of the key draw backs of DNIs is that they struggle to learn temporal gradients. Having multiple modules bootstrapping onto each other generalizes the ideas introduced by Jaderberg et al. 2017 and can in principle lead to DNI models that learn more quickly.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the enthusiastic and constructive feedback, which we have taken on board.\\n\\n1. Focus on temporal feedback signals: The main point made by the reviewer is a point that we have tried hard to get right. We are introducing a general framework, but needed to focus on a specific aspect of this more general framework, which we decided to be the temporal domain as it is easier to demonstrate the benefits of DNI (note, however, that they are not unique to the temporal domain). In the revised version we have a few things to make our presentation clearer: \\n\\n(i) we have from the outset (abstract) made it clear that although our framework is general, we are focusing on temporal feedback (i.e. BPTT). \\n\\n(ii) we have change Figure 1 to reflect this. We now present the general framework, and then provide more details on exactly how we deal with temporal delayed feedback problem using BPTT (Fig. 1a,b). \\n \\n\\n2. Bootstrapping clarification: We totally agree that we should have been clearer about this. As pointed out by the reviewer a key element is the use of bootstrapping, which we make now explicit (Fig. 1c) and point out that this can be interpreted as a given cerebellar module and potentially other modules helping one another to speed up cerebellar learning. \\n \\n\\n3. DNI vs (s)CC-DNI: Our sparse CC-DNI model indeed provides an improvement on the current DNI as shown by our results. The only difference between the (non-sparse) CC-DNI model and the DNI is that the ratio of LSTM/DNI units closely follows what is observed in biology. We have now clarified this as follows: \\u201cThis (our model) is different to DNI, in which the synthesizer contains a single hidden layer with the same number of units as LSTM\\u201d. \\n\\n\\n4. In addition, we have added three new figures and a new section (5) with specific model predictions: \\n\\n(i) New Fig. 5: Shows that the experimentally observed cerebellar expansion and sparsity are good parameters for pattern recognition tasks, which predicts that the cerebellum might have evolved to help with these types of tasks. This point was previously in the SM but seems to have been missed, so we decided to bring it to the main text.\\n\\n(ii) New Fig. 6: We performed ablation experiments to predict when are the cerebellar estimations most important. Our results show that as learning progresses the cerebellum becomes less important even impairing learning once the main network with BPTT can solve the task on its own. \\n\\n(iii) New Fig. 7: We calculated correlations between the main network and the cerebellar module using a simple feedforward CC-DNI. Our key result is that only a subset of units become more correlated over learning, consistent with recent experimental findings (Wagner et al. 2019 Cell).\"}",
"{\"title\": \"General response\", \"comment\": \"We would like to thank all the reviewers for the detailed, insightful and positive comments.\\n\\nThe main point raised was the lack of specific insights or predictions. We have addressed this directly by adding three experiments to the main manuscript, a new section and an extended discussion highlighting specific predictions, which we contrast with existing literature where possible. The fact that our predictions are consistent with existing observations, further supports our model. \\n\\nFollowing the advice from R1 we have also revised the manuscript substantially to make it clear that although we introduce a general framework, in this paper we focus on the temporal feedback problem. We further clarified the use of bootstrapping in our model, which made us suggest that the numerous modules known to exist in the cerebellum may act as an efficient bootstrapping mechanism that generalizes the one introduced by Jaderberg et al. 2017. \\n\\nOverall, we believe that the paper has been substantially improved after addressing the reviewer's comments.\"}",
"{\"title\": \"Sginificant and original hypothesis linking ML and neuroscience; some clarity issues.\", \"review\": \"**General**\\n\\nThe paper presents a very intriguing hypothesis, and I believe that its publication will benefit the community and stimulate fruitful discussion. The model seems to offer a compelling and fairly novel explanation of cerebellar deficits (including non-motor) with a broad significance across the neuroscience and deep learning communities. That said, I think that there are opportunities for improving clarity and filling in details that will be lost on readers who aren't strongly familiarity with Jaderberg et al.\\n\\nThe paper currently presents the model primarily in the feedforward setting, and the results in the recurrent setting. This becomes confusing, since the two settings can be associated with different locking problems in neuroscience (i.e. bio-plausible alternatives to backprop vs. learning from delayed rewards / bio-plausible BPTT). For instance, the abstract and main text intro (\\\"a given layer has to wait for all the layers above to finish computing its gradients\\\") seems to propose a solution for feedforward bio-plausible backprop. In the results, however, it focuses primarily on the ability to learn from delayed signals with BPTT models. I think it may help to focus earlier on the recurrent setting with references to the delayed feedback problem. \\n\\nIn Section 2, the cortical network uses the backward synthesiser to avoid needing to wait for the loss signal - however, this seems to merely shift the problem since now the synthesiser will need to wait for the loss signal to train. Reading Jaderberg et al. (and the SM), it's clear that the synthesiser is instead continually trained on bootstrapped estimates. I think that a reference to bootstrapping in the main text (and a reference to the analogy with bootstrapped value functions in RL, as in Jaderberg) would make the model much clearer.\\n\\nSince Jaderberg et al. have already shown that DNIs improve on truncated LSTMs, it seems like a more interesting comparison in the results would be DNI vs. CC-DNI. If the authors are proposing that CC-DNI is a competitive deep learning approach than I would like to see something like the original DNI architecture included as a baseline. Otherwise, I would at least like to see a clearer description of the architectural differences between DNI (as previously implemented) and CC-DNI/sCC-DNI. \\n\\n**Details**\\n\\n- Do the predictions/consequences in Fig. 1, g, roughly correspond to dL/dh_i? If so it would help to make this explicit.\\n- Figure 2(f)(g) are labelled in the caption as (e)(f).\\n- Unlike Jaderberg et al., the paper combines update and backwards locking under the same label. A clarification, either in the SM or a footnote, would help.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"A clever mixture of existing ideas\", \"review\": \"The authors consider the architecture of the cerebellum as the predictive component of a decoupled neural interface (DNI). Using this framework, they perform experiments training networks with BPTT on several temporal tasks.\\n\\nThe paper is exceptionally clear and the experimental investigations are well-done. \\n\\nHowever, it does not offer any new insight into either the cerebellum or DNI; rather it simply juxtaposes the details of two existing bodies of knowledge. The two statements that might most closely constitute new insight\\u2014that DNI is helpful on temporally challenging tasks and that sparsity-induced de-correlation can be helpful\\u2014were both established within their respective research domains. The logical induction that the authors make \\\"predicting that the cerebellum is particularly important for more temporally challenging tasks\\\" does not require the DNI to be established. That enforcing the architectural constraints of sparsity within a DNI might be helpful, although curious, does not constitute sufficiently extensive findings for a publication, and (although aided by) does not require connecting DNIs and the cerebellum.\\n\\nMuch like the platitude \\\"the brain is like a computer\\\" offers no new insight into either computers or brains, here, I do not believe that authors' investigations, although amusing to follow-along with, have added significant insight into either the cerebellum or DNIs. Therefore, I cannot offer strong support for acceptance.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Interesting idea but relatively weak connection to cerebellum\", \"review\": \"The authors proposed that cerebellum in the brain computes synthetic gradients, as used in decoupled neural interfaces (DNI), to enable learning in neural circuits without waiting for gradients to propagate backwards. The authors incorporated several architectural properties of biological cerebellum into their cerebellar model. They showed that a LSTM trained with such synthetic gradients can learn a variety of tasks, from motor reaching to caption generation. The paper is clearly written, and the link between DNI and cerebellum is a novel idea. However, the authors made little attempt to actually compare their model with experimental findings from cerebellum (except for the cerebellar properties built into the network), limiting its scientific impact. Meanwhile, it is not clear whether the cerebellum-inspired DNI provides concrete advantages over DNI proposed in Jaderberg 2017.\", \"major_comments\": \"(1) The authors made little attempt to actually compare their model with experimental findings, or at least make concrete testable predictions.\\n\\nMain results (Fig. 2-4) are mostly showing that their core models, CC-DNI and sCC-DNI, can successfully reduce losses across a variety of tasks, and do so better than heavily truncated BPTT gradients.\\n\\nExcept for the architectural features incorporated into the model, and some loose arguments that cerebellum is involved in sensory, motor, and cognitive tasks, the link between the model and biological cerebellum appears somewhat weak.\\n\\nThe authors claimed that \\u201cour work makes numerous predictions and opens the exciting door to explicit comparison\\u2026\\u201d without actually spelling out any key prediction. One prediction of the model is that when a task is very well-learned, the gradients (both real and predicted) should be close to 0, and cerebellum output neurons in the deep cerebellar nuclei should be somehow silent. Another prediction is that cerebellum lesioning should only impact learning, but not performance of well-learned tasks. I\\u2019m not sure these predictions are supported by empirical evidences. \\n\\nI would feel much more comfortable supporting this manuscript if the authors provide more comparisons with experimental data, and make more concrete testable predictions.\\n\\n(2) Critical questions about how real gradients are computed and transmitted to inferior olive is not answered.\\n\\nFor the brain, the backward lock may not be the most acute issue when searching for approximated gradient descent in brains. DNI relies on computing the real gradients, and using it to train the generation of synthetic gradients. Both computations are challenging in the brain. The authors completely circumvent this problem by saying it is \\u201coutside of the scope of the current paper\\u201d. However, I think this issue is critical for considering the feasibility of the proposed mechanism. For example, how can cerebellum learn to predict the gradient of individual cortical neurons when there are many fold less Purkinje cells than cortical neurons? There are of course more granule cells than there are cortical neurons, but in the model, granule cells are not the ones representing the synthetic gradient, \\\\hat{g}_M, right?\\n\\nMinor\\n(1) The references are at a number of places somewhere between inaccurate and incorrect. 
Here are a few that I noticed.\\n\\nIn the introduction, the authors wrote \\u201cThese observations suggest that the cerebellum implements a universal function across the brain (Diedrichsen et al., 2019)\\u201d. However, if I\\u2019m not mistaken, the Diedrichsen review is arguing the exact opposite: that the cerebellum is not implementing a universal function.\\n\\nIn section 2.1.1, the authors wrote \\u201cHere we use LSTMs (Hochreiter and Schmidhuber, 1997) as a model of cortical networks, which have recently been mapped onto cortical microcircuit (Costa et al., 2017) \\u201d. Costa et al. (2017) provided a potential way to link cortical microcircuits to an LSTM-like structure. I don\\u2019t think it\\u2019s fair to say that LSTMs are \\\"mapped\\\" onto cortical microcircuits.\\n\\nIn section 2.1.2, the authors wrote \\u201cOn the other hand Bellec et al. (2019) showed that temporal gradients as used in BPTT are equivalent to using eligibility traces that transmit gradient information forward in time\\u201d. The eligibility-trace-based algorithm proposed by Bellec et al. (2019), namely e-prop, is not \\u201cequivalent\\u201d to back-prop. It is an approximation that works well empirically in the cases studied in that paper.\\n\\n(2) Fig. 2 panels are mislabeled.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
V8YXffoDUSa | Iterative convergent computation is not a useful inductive bias for ResNets | [
"Samuel Lippl",
"Benjamin Peters",
"Nikolaus Kriegeskorte"
] | Recent work has suggested that feedforward residual neural networks (ResNets) approximate iterative recurrent computations. Iterative computations are useful in many domains, so they might provide good solutions for neural networks to learn. Here we quantify the degree to which ResNets learn iterative solutions and introduce a regularization approach that encourages learning of iterative solutions. Iterative methods are characterized by two properties: iteration and convergence. To quantify these properties, we define three indices of iterative convergence. Consistent with previous work, we show that, even though ResNets can express iterative solutions, they do not learn them when trained conventionally on computer vision tasks. We then introduce regularizations to encourage iterative convergent computation and test whether this provides a useful inductive bias. To make the networks more iterative, we manipulate the degree of weight sharing across layers using soft gradient coupling. This new method provides a form of recurrence regularization and can interpolate smoothly between an ordinary ResNet and a ``recurrent" ResNet (i.e., one that uses identical weights across layers and thus could be physically implemented with a recurrent network computing the successive stages iteratively across time). To make the networks more convergent we impose a Lipschitz constraint on the residual functions using spectral normalization. The three indices of iterative convergence reveal that the gradient coupling and the Lipschitz constraint succeed at making the networks iterative and convergent, respectively. However, neither recurrence regularization nor spectral normalization improve classification accuracy on standard visual recognition tasks (MNIST, CIFAR-10, CIFAR-100) or on challenging recognition tasks with partial occlusions (Digitclutter). Iterative convergent computation, in these tasks, does not provide a useful inductive bias for ResNets. | [
"Residual neural networks",
"Recurrent neural networks",
"Computer vision"
] | Reject | https://openreview.net/pdf?id=V8YXffoDUSa | https://openreview.net/forum?id=V8YXffoDUSa | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"oP8RDa9IWX",
"pItLZM4dMtZ",
"o90xOiujU78",
"DwHC9Q_2dgu",
"N0sFnTaGOac",
"pqzPqbPBdjn",
"HJLFt1Xn3Iv",
"71zuAXFpnY3",
"zOvjC5Ka7t",
"BvQG62n-sFh",
"dqBAFda_34p",
"qhYGNIWFmb",
"JFD0ijnlZ03",
"aHAvnTpjGpA",
"vGVFGJbwqw9",
"RZZc865Tn9v"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040358426,
1606301869966,
1606301437647,
1606300845564,
1606300284928,
1606300211562,
1606299980366,
1606299910363,
1606299711963,
1606299275207,
1606299196190,
1606298859854,
1603959698321,
1603903882489,
1603903422216,
1603898744991
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3597/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3597/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3597/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3597/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This work provides evidence against the hypothesis that ResNets implement iterative inference, or that iterative convergent computation is a good inductive bias to have in these models. The reviewers indicate that they think this hypothesis is interesting and relevant to the ICLR community, but they do not find the current work sufficiently convincing. Both theoretically and experimentally the paper does not fully demonstrate the claim that iterative inference is not useful in ResNets, and the reviewers are unanimous in their recommendation to reject the paper until the evidence for this claim is strengthened.\"}",
"{\"title\": \"Response pt. 2\", \"comment\": \"> The recurrence index was normalized, yet there are points with a recurrence index greater than 1. Is there any explanation for this?\\n\\nA recurrence index greater than 1 essentially indicates that dropping out earlier blocks may result in better performance than dropping out the last block. Since dropping out any block does not strongly affect the ResNet's performance, this may occasionally happen.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your review! We found your suggestions very helpful and lay out below how we attempted to address them in our rebuttal. Moreover, our results on the performance of the ResNets now include a measure of variation.\\n\\n> The strong conclusion that ResNets do not benefit from recurrence regularization is premature, given the current set of experiments presented in this manuscript. As the authors themselves point out, \\\"iterative computation\\\" is an inductive bias. However, there is little reason to believe that this inductive bias is the right one for a classification problem. Have the authors tried to consider problems other than image classification? For instance, there has been recent literature on the relevance of iterative computation (visual routines) for contour detection and segmentation problems. Moreover, \\\"accuracy\\\" is not the only way to quantify the benefit. Have the authors tried to measure sample efficiency? i.e., can a ResNet employing iterative computations learn from fewer training samples than a non-iterative ResNet?\\n\\nDuring the rebuttal, we have explored several new tasks. First of all, we found no benefit in recurrence regularization when training ResNets on CIFAR-10 with only 2048 samples. We also further explored Digitclutter, a task which involves recognizing several overlapping digits. Due to these partial occlusions, Digitclutter and related tasks have previously been demonstrated to benefit from recurrent processing (Wyatte et al., 2014; Spoerer et al., 2017). For the investigated ResNets, however, encouraging recurrent processing using higher coupling parameters did not provide a useful inductive bias.\\n\\nWe appreciate the reviewer\\u2019s suggestion of potential experiments that may make our conclusion more convincing. We hope that the experiments we added in the rebuttal take a first step into that direction, but have also modified our language to emphasize that our conclusion, for now, are limited to the examined set of experiments. We have also added a paragraph in section 2 laying out why the Digitclutter task is particularly relevant in the context of recurrence regularization. In the future, it would be very interesting to train recurrence-regularized ResNets on a more diverse set of tasks such as image segmentation.\\n\\n> How does the performance benchmarking of a \\\"fully recurrent\\\" ResNet compare to a comparable-sized LSTM/GRU trained on this task? Or even a weight-shared ResNet? These comparisons seem to be necessary to discern if the soft gradient coupling is introducing other artificial biases.\\n\\nWhereas we have not trained an LSTM or GRU on this task, we note that a fully coupled ResNet corresponds to a weight-shared ResNets; if the parameters\\u2019 gradients are fully coupled, the parameters themselves will remain equal across blocks within a stage throughout the entire training. This is the reason why soft gradient coupling allows us to interpolate between an ordinary and a recurrent ResNet. It would, however, certainly be interesting to explore alternative forms of recurrence regularization to see if the soft gradient coupling is introducing any biases.\\n\\n> It is unclear why the authors believed that a high degree of soft-gradient coupling would help with the divergence issue in the first place. 
Implementing the same computation repeatedly only converges (and stays there) given certain other properties of the transformation function applied (for instance, the spectral radius of each Residual block's Jacobian). There is quite a bit of theoretical/empirical work in the literature in this regard.\\n\\nThis is absolutely right and we have modified our language to make clear that the divergence of recurrent networks is not surprising. In addition, we have added a new set of models where the spectral radius of the residual function within each block is constrained. This allowed us to explore whether convergence provides a good inductive bias for ResNets. Our results suggest that it does not (see section 5.2 and 5.3), further supporting our conclusion that iterative convergence, under the four tasks we studied, may not be a useful inductive bias. In introducing our method of making ResNets convergent (see section 4.2 and Appendix B), we also discuss theoretical convergence results from the literature.\\n\\nWyatte, D., Jilk, D. J., & O\\u2019Reilly, R. C. (2014). Early recurrent feedback facilitates visual object recognition under challenging conditions. Frontiers in Psychology, 5, 674.\\n\\nSpoerer, C. J., McClure, P., & Kriegeskorte, N. (2017). Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology, 8.\"}",
"{\"title\": \"Response pt. 3\", \"comment\": \">About the soft gradient coupling, doesn't this simply mix the gradients and inject more stochasticity to them? In general, would you expect (when $0 < \\\\lambda < 1$) that just like in typical SGD, this stochasticity will be averaged out by the optimization procedure of deep networks? Since the $\\\\tilde{\\\\Delta}_t$ no longer fully reflect the mini-batch gradient descent direction, have you checked how the block parameters within the same stage gradually deviate from one another as you optimize the network (e.g., how does the standard deviation of $\\\\mathbf{W}_l^{(s)}$ over all layer $l$'s in the same $s$ change over training iterations? Do they deviate or stay around? If these weights are eventually still different, why can one still consider them to be \\\"similar\\\" (other than the RI metric, which I find to be a debatable metric given the #3 above...)?\\n\\nThe reviewer raises a number of important questions here. Regarding the latter question, the average deviation between block parameters within the same stage after training decreases with higher coupling parameters (see Fig. 7c). In that sense, they are therefore more similar to each other. Here we define this deviation as the average absolute difference between the block parameters and their mean across blocks within the same stage.\\n\\nRegarding the former question, we would expect the bias introduced by soft gradient coupling not to be averaged out over multiple samples. After all, soft gradient coupling -- consistent, throughout all samples of SGD -- is biased towards aligning the gradients of the parameters across blocks within a stage and therefore encourages the optimization procedure to find more recurrent minima.\\n\\n> For the EPC, have you computed the EPC of an ordinary ResNet and a purely recurrent ResNet? How do their EPC look like when compared to the soft gradient coupled ResNets (e.g., $\\\\lambda=0.5$)?\\n\\nFigure 7 (in the appendix) depicts the EPC plotted against the raw parameter count. The EPC of fully recurrent and non-recurrent ResNets is roughly equal to their raw parameter count and decreases for higher intermediate coupling parameters. Note that our revised manuscript has shifted the focus away from the effective parameter count and we have therefore moved this section to the appendix.\\n\\n> In Section 5.3, the paper claims that \\\"if this is the case, we would expect soft gradient coupling to find such a solution.\\\" Why?\\n\\nOur language here was too strong and we have modified it accordingly. We would expect that soft gradient coupling might be able to find such a solution because the coupled gradients shift the training dynamics towards more recurrent solutions without the parameter space being constrained to a, perhaps overly restrictive, space of fully recurrent networks.\\n\\n> And isn't a soft gradient coupled ResNet still a non-recurrent ResNet (in the sense that you can't simply unroll a single layer to get the output; you still need to store all parameters of the network, rather than only a single layer of it)?\\n\\nIf $\\\\lambda<1$, this is true. However, as the EPC demonstrates, in a softly coupled network, the parameters of the individual blocks within a stage are less spread out, which may allow us to more easily find a partly recurrent structure expressing this computation. However, this is merely speculative at the moment and we have therefore shifted the focus of the article away from this point. 
Instead, we have decided to focus on the more concrete question whether iterative convergent behavior provides a useful inductive bias.\"}",
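Our reading of soft gradient coupling, as discussed in this thread, can be sketched as a post-backward step over one ResNet stage. The mixing rule (a convex combination of each block's gradient with the stage-wise mean, weighted by the coupling parameter) and the deviation measure follow the descriptions above, but the code itself is an illustrative assumption, not taken from the paper.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lam = 0.5                                   # 0: ordinary ResNet, 1: recurrent
blocks = nn.ModuleList([nn.Conv2d(8, 8, 3, padding=1) for _ in range(4)])

def couple_gradients(blocks, lam):
    # Mix each parameter's gradient with its mean over blocks in the stage.
    for group in zip(*[list(b.parameters()) for b in blocks]):
        mean_grad = torch.stack([p.grad for p in group]).mean(dim=0)
        for p in group:
            p.grad = (1 - lam) * p.grad + lam * mean_grad

def stage_deviation(blocks):
    # Average absolute difference between block parameters and their mean.
    devs = []
    for group in zip(*[list(b.parameters()) for b in blocks]):
        w = torch.stack([p.detach() for p in group])
        devs.append((w - w.mean(dim=0)).abs().mean())
    return torch.stack(devs).mean().item()

x = torch.randn(2, 8, 16, 16)
h = x
for blk in blocks:
    h = h + torch.relu(blk(h))
h.pow(2).mean().backward()                  # toy loss to populate gradients
couple_gradients(blocks, lam)               # call before optimizer.step()
print(stage_deviation(blocks))
```

With lam = 1 the coupled gradients are identical across blocks, so identically initialized blocks stay identical through training, which is the sense in which full coupling yields a weight-shared (recurrent) ResNet.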
"{\"title\": \"Response pt. 2\", \"comment\": \"> One main problem that I found about this paper is its definition of the convergence/divergence indices. The \\\"convergence\\\" concept in this paper is constrained to look at the accuracy convergence, by which the authors look at the inverse of the AUC of the classification rate curve. But given the nature of softmax and classification task itself, I don't think a convergence in accuracy is a good \\\"index\\\" for measuring convergence of an architecture, which Section 3.1 looks at (for $\\\\hat{z}_i^{(t)}$). For example, softmax is constant up to a shift of constant. And for classification of, let's say 10 objects $(x_1, \\\\dots, x_{10})$, getting $x_1, \\\\dots, x_5$ correct is still different from getting $x_6, \\\\dots, x_{10}$ correct, even though they both have \\\"50% accuracy\\\". The paper investigates CIFAR-10, where one can achieve >94% accuracy, but in cases like ImageNet where 70% accuracy is normal, these two 50% are certainly non-convergent to me. Also, I'm assuming the entire Figure 1 is on the simple 2-dimensional linear task? Does the phenomenon in Figure 1d repeat in high dimensionality? If so, what does it look like? (My experience with this suggests that if you keep stacking the same block, the activations will eventually oscillate, if not converge, but it could differ by initialization.)\\n\\nWe agree that our definition of the indices of iterative convergence, as based on accuracy, has certain limitations. We would argue that in some contexts, we actually only care about perturbations that affect accuracy. For instance, if the latter blocks of a ResNet only shift the final output by a constant, this does not matter if we are interested in the network\\u2019s label prediction.\\n\\nNevertheless, we agree with the reviewer that only using accuracy is a limitation and that this should be addressed. For this purpose, we have proposed a number of alternative definitions and laid them out in section B and Fig. 6 in the Appendix. To address the example with the incompatible predictions resulting in the same accuracy, we measured accuracy with respect to the unperturbed network rather than the ground truth. Fig. 6a demonstrates that this does not change our conclusions. Thank you for this example, which we have included as motivation in section B.\\n\\nMoreover, we could also use more nuanced measures of distance. For example, instead of accuracy, we could measure crossentropy, or we could simply measure the Euclidean distance between the intermediate and the final representation. Fig. 6b and 6c visualize the effect of the perturbations defining the Convergence and Divergence Index, respectively.\\n\\nThis allows us to answer the reviewer\\u2019s latter question. The fact that the Euclidean distance monotonously increases with additional evaluations of the last block suggests that for ordinary and gradient-coupled ResNets, the phenomenon in Fig. 1d indeed repeats in high dimensionality and the representation smoothly moves away from its final outcome. It is intriguing that this is different from the reviewer\\u2019s experience and it would be interesting to compare the conditions under which this trajectory begins to oscillate.\\n\\n> Some arguments are also a bit handwavy to me and I'd appreciate if the authors can expand on them. For example, in Section 3.2, the paper claims \\\"in contrast, the skip connections encourage a ResNet to use the same representational format across blocks... 
[and] are therefore better aligned with the final decoder\\\". As another example, the paper claims ResNets learn a balance between \\\"feedforward and iterative computations\\\". These are all intuitively reasonable arguments indeed, but considering that this is an empirical study paper, I think actually verifying these would make the paper stronger.\\n\\nWe have rewritten the latter part of the introduction and hope that the revised version more clearly outlines our motivation and arguments. In particular, we would like to clarify that the intuition that ResNets learn a balance of feedforward and iterative computations arises from their behavior under the perturbations detailed in section 3.3. This intuition motivates the hypothesis that iterative convergent behavior may provide a useful inductive bias. Since, for our experiments, this hypothesis seems to be wrong, we would, if anything, argue that the intuition that ResNets learn this balance is misleading.\"}",
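To make the agreement-based alternative concrete, here is a minimal PyTorch-style sketch of the curve underlying the accuracy-agreement variant of the Convergence Index (the `readout_at` hook is hypothetical and stands in for an early readout after t residual blocks; this is an illustration, not the evaluation code used in the paper):

```python
import torch

def agreement_curve(model, loader, num_blocks, device="cpu"):
    """Fraction of inputs whose early-readout prediction matches the
    unperturbed, full-depth prediction, for each truncation depth t.
    The area under this curve gives an agreement-based analogue of
    the Convergence Index discussed above.

    `model.readout_at(x, t)` is a hypothetical hook returning logits
    after t residual blocks, decoded with the final classifier head.
    """
    match = torch.zeros(num_blocks + 1)
    total = 0
    model.eval()
    with torch.no_grad():
        for x, _ in loader:
            x = x.to(device)
            full = model(x).argmax(dim=1)  # reference prediction
            total += x.size(0)
            for t in range(num_blocks + 1):
                early = model.readout_at(x, t).argmax(dim=1)
                match[t] += (early == full).sum().item()
    return match / total
```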
"{\"title\": \"Response\", \"comment\": \"Thank you for your review, which has helped us better integrate our work into the existing literature.\\n\\n> Again, I think it is interesting to investigate the relationship between ResNets and iterative computations. But besides the canonical, plain unrolling of the layers that the authors have looked at, implicit models (i.e., models that study the continuous dynamics of a layer $f$) like Neural ODEs [1] and Deep Equilibrium Models [2] (there's a ResNet version of it) are both also looking at compact recurrent networks. In particular, the deep equilibrium models especially targets the convergence (i.e., the \\\"fixed point\\\" of the layer), and seems to demonstrate state-of-the-art level performances. In contrast to what the authors provided in the last paragraph of Section 1, I would therefore argue (based on Neural ODEs and deep equilibrium nets) that recurrence does offer some notable advantages like constant memory cost and analytical gradients. The other related thread of work is simply the classical recurrent backprop (RBP) theories, which study the convergence of recurrent networks and how one can leverage such property for the backward pass of these networks. I found the current version of the paper did not discuss either aspect of this, which I believe is important literature that actually is on the opposite side (partially) of what the authors are trying to claim.\\n\\nThank you for pointing us to these fascinating models. Though the focus of our investigation concerns whether iterative convergent behavior provides a useful inductive bias for ResNets, these methods that are more directly related to iterative methods are highly relevant in his context. We have added a paragraph in section 2 on this body of work and have clarified the particular focus of our own work.\\n\\n> There are actually many ways that I can think of to make recurrent residual blocks converge when you infinitely repeat it. For example, with spectral normalization [3], we can simply make the Jacobian of the block have an operator norm $<1$. Then Banach fixed point theorem will guarantee convergence. Other methods are also possible (e.g., via a provably convergent optimization perspective). These are not discussed in the paper (nor are they the main focus, I guess), but this doesn't mean that ResNets do not converge in general. The authors argue that \\\"some balance between feedforward and iterative computations might have been learned by the ResNets\\\", but there is actually a lot of noise in the analysis... for example, the networks could be overfitting, etc. The point is, as long as you regularize the model in that direction, the ResNets could still converge.\\n\\nThis is a really good point and, in fact, our rebuttal includes a new set of ResNet models whose residual functions are constrained using spectral normalization. These models demonstrate, as you stated, that the ResNets converge as long as the models are regularized in that direction. However, this convergence comes at the cost of worse performance, suggesting that the divergence of ordinary ResNets may not be a mere artifact of ordinary training.\"}",
"{\"title\": \"Response pt. 3: References\", \"comment\": \"Jastrz\\u0119bski, S., Arpit, D., Ballas, N., Verma, V., Che, T., & Bengio, Y. (2017). Residual Connections Encourage Iterative Inference. https://arxiv.org/abs/1710.04773v2\\n\\nWyatte, D., Jilk, D. J., & O\\u2019Reilly, R. C. (2014). Early recurrent feedback facilitates visual object recognition under challenging conditions. Frontiers in Psychology, 5, 674.\\n\\nSpoerer, C. J., McClure, P., & Kriegeskorte, N. (2017). Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology, 8.\\n\\nYoshida, Y., & Miyato, T. (2017). Spectral norm regularization for improving the generalizability of deep learning. ArXiv Preprint ArXiv:1705.10941.\\n\\nLinsley, D., Ashok, A. K., Govindarajan, L. N., Liu, R., & Serre, T. (2020). Stable and expressive recurrent vision models. ArXiv:2005.11362 [Cs]. http://arxiv.org/abs/2005.11362\"}",
"{\"title\": \"Response pt. 2\", \"comment\": \"> Figure 1 is difficult to understand. Are you plotting activities against each other? What do the different dots represent for the feedforward model? Is the interaction in (d) meaningful or is this just by happenstance, and the divergence from x is the meaningful quality.\\n\\nFigure 1 depicts the two-dimensional estimates of the output according to the different considered algorithms. In the case of the feedforward model, each dot represents the readout after a certain intermediate stage. Since the subsequent layers are not aligned to each other, this early readout does not result in a useful estimate, as is apparent from the figure. This is in contrast to the intermediate estimate from the residual networks, which smoothly approach their final output. By interaction, do you mean that the two trajectories in (d) are roughly orthogonal to each other? In that case, this is pure happenstance; rather, the fact that both trajectories diverge from x is relevant. Thank you for raising these issues; we have attempted to explain the figure more clearly in the main text as well as the figure legend.\\n\\n> Why ResNet-104?\\n\\nWe chose 16 residual blocks per layer, because we considered this a representative depth for a typical ResNet. For example, He et al. (2016) consider a similarly deep ResNet-110. We expect these results to be robust across different depths. We would be happy to train network instances with a different number of residual blocks for the camera-ready version of the paper. (Note that we should have referred to the considered network as ResNet-101 instead of ResNet-104 and have now corrected this in the manuscript.)\\n\\n> When dropping ResNet blocks, how do you deal with the subsampling between layers? When you drop out the first layer do you also drop out the max pooling at that layer? Is there any chance that these distinctions in computations and resolutions between blocks that you're dropping could bias your observed results?\\n\\nThe subsampling between layers is implemented as part of an additional residual block, which is never removed as part of the perturbations. We have added a sentence in section A.3 to make this clearer. Note that the subsampling does not work by max pooling, but by direct subsampling with stride 2, just as in the original ResNet implementation.\\n\\n> Regarding the Divergence index, the authors should review [2]. I don't think a high divergence index necessarily means that the ResNet isn't learning the function of an RNN \\u2014 only that the learned function is not stable, which makes sense given ResNet hyperparams. This paper suggests that if you change the model nonlinearities to globally contractive ones like tanh or sigmoid (or use their algorithm) you'll control this problem.\\n\\nWe agree that a high divergence index does not necessarily mean that the ResNets is not learning a recurrent function. We appreciate the reviewer raising this issue and hope this is more clear in the reviewed manuscript. Instead, the Divergence Index is intended to be a measure of whether the learned function is stable, i. e. convergent. Indeed, the reviewer raises an important point here: even though higher coupling parameters generally lead to a lower Divergence Index, the considered recurrent and non-recurrent ResNets are all not convergent. Since we aim to study not only the impact of iterative, but also convergent behavior on the network\\u2019s inductive bias, this was a limitation. 
To address this, we have now introduced spectral normalization to define convergent ResNets in the revised manuscript. Our method is based on the upper bound on the Lipschitz constant of a convolutional neural network as introduced by Yoshida et al. (2017), but a similar network could be defined using the method by Linsley et al. (2020). We discuss their relationship in the last paragraph of section 2.\\n\\n> The gradient coupling is forcing a fixed combination between the gradients of successive layers. But gated RNNs are standard for recurrent vision models, and these do not have such a constraint. Is it possible that the ResNet without shared weights is learning the function of a gated RNN rather than the vanilla RNNs that you're comparing to here?\\n\\nThat is an interesting question. We think it is, in principle, possible for a ResNet to learn the function of a gated RNN or at least approximate it very well on the data distribution. This could be even more easily implemented by a highway network, which, depending on its implementation, is actually a non-recurrent generalization of the gated RNN. Coupling the gradients of a highway network would allow us to interpolate between non-recurrent highway networks and gated RNNs. This could induce a useful inductive bias and would certainly constitute an interesting investigation for the future.\"}",
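For readers who want the coupling mechanism spelled out, here is a minimal sketch of soft gradient coupling, under the assumption that the coupled update interpolates each block's gradient with the stage-mean gradient via a coupling parameter lambda (the exact update rule in the paper may differ):

```python
import torch

def couple_stage_grads(blocks, lam):
    """Interpolate each block's gradients with the stage-mean gradient.

    lam = 0: independent blocks (an ordinary ResNet stage).
    lam = 1: identical updates, so equally initialized blocks stay
    identical -- effectively a weight-shared (recurrent) stage.
    Assumes all blocks in `blocks` share the same architecture.
    Call after loss.backward() and before optimizer.step().
    """
    params = [list(b.parameters()) for b in blocks]
    for per_block in zip(*params):  # corresponding parameters
        grads = [p.grad for p in per_block]
        if any(g is None for g in grads):
            continue
        mean_grad = torch.stack(grads).mean(dim=0)
        for p in per_block:
            p.grad.mul_(1.0 - lam).add_(mean_grad, alpha=lam)
```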
"{\"title\": \"Response pt. 1\", \"comment\": \"Thank you for your helpful review. Your questions helped us a lot in revising the explanations in our manuscript. Below we directly answer these questions and detail how we attempted to address the issues you have raised.\\n\\n> This idea of the visual system acting as a generative model is still far from worked out, and I'd prefer the authors hedge their language on it.\\n\\nWe agree with this and have modified our language accordingly.\\n\\n> There's also other tasks that are more closely linked to recurrent processing than the ones studied here. For example, I recommend the authors experiment with Pathfinder or cABC of [1], which are solved in fewer samples by recurrent networks vs. feedforward models. This would allow you to plot the amount of gradient coupling on one axis, and the number of samples need to solve (e.g.) Pathfinder on the other axis, which I think would be very elegant.\\n\\nThank you for suggesting these datasets. Evaluating the different models on these tasks would indeed be interesting and provide important insights into whether recurrence regularization can provide a useful inductive bias for ResNets. Since these images, at 300 x 300 pixels, are much larger than the comparably small datasets we have considered so far, two weeks were not enough time to adapt the ResNets to this larger task.\\n\\nHowever, we have included another task, which is also more closely linked to recurrent processing than Cifar-10 and MNIST: Digitclutter consists of a number of partially occluded digits. Recognizing occluded stimuli often requires recurrent processing in humans (Wyatte et al., 2014) and convolutional neural networks have been shown to benefit from recurrent connections under this task (Spoerer et al., 2017). We have significantly extended our experiments using this dataset, examining versions ranging from two to five partially occluded digits. For all these tasks, recurrence regularization did not improve performance. We now present these results in the main text (section 5.3) and have added the paragraph \\u2018Inductive bias of recurrent operations on visual tasks\\u2019 to section 2 in order to motivate their significance. This paragraph also includes the relevant literature on Pathfinder and cABC. \\n\\nWe agree that examining the relationship between recurrence regularization and performance on Pathfinder and cABC would be very interesting. We hope that our experiments on Digitclutter provide a better intuition for the ResNets\\u2019 performance on tasks which have been linked to recurrent processing.\\n\\n> I appreciate the authors laying out their hypotheses like they did, but I found the language to be indirect. Is there a simpler way to motivate these hypotheses?\\n\\\"Our findings also suggest, however, that deep feedforward computations may not be characterized as iterative refinement on a latent representation, but, at most, as non-iterative refinement on this representation.\\\" How do you refine non-iteratively/incrementally? More generally, I felt like the authors overloaded \\\"iterative\\\" and the manuscript would benefit from a more careful treatment of the exact computations they're /referring to. Give concrete examples.\\nAre you comparing ResNets to RNNs or iterative algorithms (as are alluded to in the intro)? I am confused by the motivation here, which changes from paragraph to paragraph.\\n\\nThank you for raising this issue. 
We have rewritten the latter part of the introduction and hope that our revised motivation is stated more clearly. More specifically, our investigation was motivated by the observation that the feedforward computations within ResNets have certain similarities to the recursive operations of an iterative method: throughout the layers the representation is gradually refined, slowly approaching its final state. It has previously been proposed that this iterative refinement may be part of the reason for ResNets\\u2019 good performance on many computer vision tasks (Jastrz\\u0119bski et al., 2018). This suggests that iterative convergent behavior may be a useful inductive bias for ResNets. In our article, we aim to investigate whether this is the case.\\n\\nWe appreciate the reviewer detailing why the terminology and motivation of the original manuscript had been confusing. We have revised the corresponding parts of the manuscript and we hope that our motivation and terminology are now more clearly laid out.\"}",
"{\"title\": \"Response pt. 2: References\", \"comment\": \"Jastrz\\u0119bski, S., Arpit, D., Ballas, N., Verma, V., Che, T., & Bengio, Y. (2017). Residual Connections Encourage Iterative Inference. https://arxiv.org/abs/1710.04773v2\\n\\nWyatte, D., Jilk, D. J., & O\\u2019Reilly, R. C. (2014). Early recurrent feedback facilitates visual object recognition under challenging conditions. Frontiers in Psychology, 5, 674.\\n\\nSpoerer, C. J., McClure, P., & Kriegeskorte, N. (2017). Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology, 8.\\n\\nYoshida, Y., & Miyato, T. (2017). Spectral norm regularization for improving the generalizability of deep learning. ArXiv Preprint ArXiv:1705.10941.\"}",
"{\"title\": \"Response\", \"comment\": \"Thank you for your insightful review. In our rebuttal, we have attempted to address the points you have raised by running new experiments and revising our manuscript.\\n\\n> The motivation for this paper is somewhat weak, or at least weakly justified. The authors define an iterative method/algorithm as one that uses repeated iterations and convergences to a solution. It is then hypothesized that such behavior might be a good inductive bias for neural networks, but it is not discussed why this might be expected. After all, iterative algorithms are designed to converge after a (non-fixed) number of steps, and neural nets are not. I think that either the motivation should be justified better, or it is simply a question without a strong motivation (this doesn't necessarily make it unimportant, just less important).\\n\\nThank you for pointing this out. Our investigation was motivated by the observation that the feedforward computations within ResNets have certain similarities to the recursive operations of an iterative method: throughout the layers the representation is gradually refined, slowly approaching its final state. It has previously been proposed that this iterative refinement may be part of the reason for ResNets\\u2019 good performance on many computer vision tasks (Jastrz\\u0119bski et al., 2018). This suggests that iterative convergent behavior may be a useful inductive bias for ResNets. We have rewritten the latter part of the introduction and hope that our revisions have clarified our motivation.\\n\\n> Moreover, if we'd like the outputs of the neural network to be \\\"stable\\\" for reasons other than metrics like accuracy, there are certain methods and lines of research on this subject, such as Ciccone et al. cited by the authors. Those works already claim that computations learned by ResNets are not stable, and suggest methods to make them so. Doesn't that make the question investigated (whether ResNets learn iterative convergent behavior) in this paper somewhat redundant?\\n\\nThank you for raising this issue, it has helped us refine the presentation of our motivation as we have detailed above. In particular, we have emphasized our investigation into whether iterative convergent behavior is a useful inductive bias for ResNets. Though higher gradient coupling parameters also increase the Convergence Index and decrease the Divergence Index, this method largely focuses on making ResNets more iterative. We therefore complement our recurrence regularization by a convergence regularization using prior work on Lipschitz bounds on convolutional neural networks (Yoshida et al., 2017). Section 4.2 defines our new convergent ResNets and sections 5.2 and 5.3 summarise our findings on their performance. To summarise the findings, this convergence regularization negatively impacts performance, as well, providing further evidence that ResNets may not benefit from iterative convergent behavior.\\n\\n> Finally, the negative experimental results are interesting but I think they need to be stronger to be convince a reader that this is a result that can be expected to generalize. Due to the simplicity of the datasets (and no confidence intervals on the numbers in Table 1), evidence for the negative impact of encouraging iterative convergent behavior on performance is still preliminary. 
More datasets and tasks of higher complexity will certainly help here.\\n\\nWe agree that it is difficult to demonstrate this result and have stated more carefully that our negative results are limited to the tasks we have considered. Thank you for proposing additions that may convince a reader of the generality of the results. To convey the performance variation across several training instances, we now plot performance in a scatterplot, using mean and standard deviation as summary statistics (Figure 4). This demonstrates that performance is quite consistent across several instances. We have also added more datasets to our investigation. First of all, our results replicate on CIFAR-100 (see Fig. 13).\\n\\nMoreover, the Digitclutter dataset requires the ResNets to recognize partially occluded digits. Such tasks involve recurrent processing in humans (Wyatte et al., 2014) and benefit from recurrent connections in certain convolutional neural networks (Spoerer et al., 2017). If there are datasets for which recurrence regularization provides an advantage for ResNets, we may therefore expect Digitclutter to be among them. We have significantly extended our results on Digitclutter, demonstrating that recurrence regularization does not provide a better inductive bias for datasets with a wide range of complexity. We now present parts of these results in the main text (section 5.3) and have added the paragraph \\u2018Inductive bias of recurrent operations on visual tasks\\u2019 to section 2 in order to motivate their significance. We hope that these experiments will make our findings more generally applicable.\"}",
"{\"title\": \"General remarks\", \"comment\": \"We would like to thank the reviewers for their insightful questions and helpful suggestions. The reviews have helped us significantly improve our manuscript during the rebuttal. We are glad that the reviewers found the topic of the article to be important and interesting and generally considered our proposed indices to be useful.\\n\\nWe respond to each reviewer individually, but would like to give an overview over the most important changes to the manuscript:\\n\\nAll reviewers noted that they found the motivation to be presented overly complicated, which made the results more difficult to understand. We have now attempted to lay out the motivation significantly simpler. In particular, the previous observations that a ResNet shares certain properties with an iterative method have motivated the hypothesis that ResNets approximate iterative methods. Iterative convergent behavior may therefore provide a useful inductive bias for ResNets. Iterative methods are characterized by two properties, iteration and convergence, that we manipulate in ResNets via soft gradient coupling and spectral normalization.\\n\\nAll reviewers suggested that it was not clear why we expected soft gradient coupling to make ResNets more convergent. We clarified that it is indeed unsurprising that the recurrent ResNets are divergent and have now included a new method of convergence regularization. This method results in a weaker performance of our trained ResNets, suggesting that convergent behavior may not be a useful inductive bias.\\n\\nFinally, all reviewers suggested we explore more tasks. For our rebuttal, we added experiments on CIFAR-100. Moreover, we more extensively evaluated the performance of gradient-coupled ResNets on Digitclutter. In this task, the network must recognize the identity of a number of digits, which partially occlude each other. This task and related versions have been demonstrated to benefit from recurrent processing in humans (Wyatte et al., 2014) and artificial algorithms (Spoerer et al., 2017).\\n\\nOnce again, we would like to thank the reviewers for their work and look forward to a potential further discussion of our findings.\\n\\n\\n\\nWyatte, D., Jilk, D. J., & O\\u2019Reilly, R. C. (2014). Early recurrent feedback facilitates visual object recognition under challenging conditions. Frontiers in Psychology, 5, 674.\\n\\nSpoerer, C. J., McClure, P., & Kriegeskorte, N. (2017). Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition. Frontiers in Psychology, 8.\"}",
"{\"title\": \"Concerns about motivations and results\", \"review\": \"## Paper Summary\\n\\nThis paper studies the correspondence between residual networks and iterative algorithms that repeat computations and converge to a solution. The authors suggest that residual networks can in principle implement such iterative algorithms and experimentally show that networks trained in practice do not naturally learn them. They also define three indices to quantify the degree to which a ResNet shows properties of iterative algorithms. Finally, they show that while soft gradient coupling across layers within stages can ensure that learned ResNets behave more like iterative algorithms, this does not appear to provide a useful inductive bias for image classification tasks.\\n\\n## Strengths\\n\\nStudying the nature of programs that are learned by neural networks of various architectures is an interesting and important research problem. This paper makes a contribution to it by examining the extent to which ResNets implement algorithms similar to iterative solvers.\\n\\nThe authors define numerical indices to formalize the criteria for \\\"iterative-ness\\\" that they are looking for, which are useful for comparisons.\\n\\nThe paper contains a negative result about the utility of forcing iterative behavior on ResNets using the proposed gradient coupling trick. This negative result may be useful to researchers interested in similar ideas in the future.\\n\\n## Weaknesses\\n\\nThe motivation for this paper is somewhat weak, or at least weakly justified. The authors define an iterative method/algorithm as one that uses repeated iterations and convergences to a solution. It is then hypothesized that such behavior might be a good inductive bias for neural networks, but it is not discussed why this might be expected. After all, iterative algorithms are designed to converge after a (non-fixed) number of steps, and neural nets are not. I think that either the motivation should be justified better, or it is simply a question without a strong motivation (this doesn't necessarily make it unimportant, just less important).\\n\\nMoreover, if we'd like the outputs of the neural network to be \\\"stable\\\" for reasons other than metrics like accuracy, there are certain methods and lines of research on this subject, such as Ciccone et al. cited by the authors. Those works already claim that computations learned by ResNets are not stable, and suggest methods to make them so. Doesn't that make the question investigated (whether ResNets learn iterative convergent behavior) in this paper somewhat redundant?\\n\\nFinally, the negative experimental results are interesting but I think they need to be stronger to be convince a reader that this is a result that can be expected to generalize. Due to the simplicity of the datasets (and no confidence intervals on the numbers in Table 1), evidence for the negative impact of encouraging iterative convergent behavior on performance is still preliminary. More datasets and tasks of higher complexity will certainly help here.\\n\\n## Review Summary\\n\\nSince the paper lacks strong motivations, clear significance and highly convincing results, I am currently unable to recommend an acceptance. If the authors can elaborate on their motivations and discuss them in light of related work as mentioned above, I am willing to reconsider my score. \\n\\n## After Author Response\\n\\nI think the changes to the paper have improved it. In particular, the reference to Spoerer et al. 
gives more weight to the motivations of the paper. I'm increasing my score slightly as a result.\\nHowever, the relation to prior work remains hazy. The model of Ciccone et al., for example, has shared weights, stability over increased depth, and still performs as well as ResNets generally. Doesn't this go against the conclusions of this paper?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"\", \"summary\": \"This paper investigates the extent to which the computations implemented by an optimized ResNet resemble those of a recurrent network. They develop new tools to study this question, and find evidence that ResNet performance is *hurt* when it is forced to act like a specific kind of RNN.\", \"strengths\": \"I am excited that the authors are re-examining the assumptions of now classic work likening RNNs to ResNets. They develop elegant tools for continuously transforming between the two, and compare performance on several image classification tasks.\", \"weaknesses\": \"I'm not totally convinced by the author's pitch for inverse problems. This idea of the visual system acting as a generative model is still far from worked out, and I'd prefer the authors hedge their language on it. There's also other tasks that are more closely linked to recurrent processing than the ones studied here. For example, I recommend the authors experiment with Pathfinder or cABC of [1], which are solved in fewer samples by recurrent networks vs. feedforward models. This would allow you to plot the amount of gradient coupling on one axis, and the number of samples need to solve (e.g.) Pathfinder on the other axis, which I think would be very elegant.\\n\\nI appreciate the authors laying out their hypotheses like they did, but I found the language to be indirect. Is there a simpler way to motivate these hypotheses?\\n\\nFigure 1 is difficult to understand. Are you plotting activities against each other? What do the different dots represent for the feedforward model? Is the interaction in (d) meaningful or is this just by happenstance, and the divergence from x is the meaningful quality. \\n\\nWhy ResNet-104?\\n\\nWhen dropping ResNet blocks, how do you deal with the subsampling between layers? When you drop out the first layer do you also drop out the max pooling at that layer? Is there any chance that these distinctions in computations and resolutions between blocks that you're dropping could bias your observed results?\\n\\nRegarding the Divergence index, the authors should review [2]. I don't think a high divergence index necessarily means that the ResNet isn't learning the function of an RNN \\u2014 only that the learned function is not stable, which makes sense given ResNet hyperparams. This paper suggests that if you change the model nonlinearities to globally contractive ones like tanh or sigmoid (or use their algorithm) you'll control this problem.\\n\\nThe gradient coupling is forcing a fixed combination between the gradients of successive layers. But gated RNNs are standard for recurrent vision models, and these do not have such a constraint. Is it possible that the ResNet without shared weights is learning a the function of a gated RNN rather than the vanilla RNNs that you're comparing to here?\\n\\n\\\"Our findings also suggest, however, that deep feedforward computations may not be characterized as iterative refinement on a latent representation, but, at most, as non-iterative refinement on this representation.\\\" How do you refine non-iteratively/incrementally? More generally, I felt like the authors overloaded \\\"iterative\\\" and the manuscript would benefit from a more careful treatment of the exact computations they're /referring to. Give concrete examples.\\n\\nAre you comparing ResNets to RNNs or iterative algorithms (as are alluded to in the intro)? 
I am confused by the motivation here, which changes from paragraph to paragraph.\\n\\n[1] Kim et al. Disentangling neural mechanisms for perceptual grouping. ICLR 2020.\\n[2] Linsley et al. Stable and expressive recurrent vision models. NeurIPS 2020.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Interesting investigation but I found the metrics and conclusion not quite convincing\", \"review\": \"> Summary: This paper investigates the relationship between deep ResNets and (implicitly) iterative computations. The authors introduce two main hypotheses that are at the core of the investigation: 1) whether the iterative inductive bias improves ResNet performance; and 2) whether recurrent ResNets are more parameter-efficient. The paper also proposes three metrics for studying the convergence and divergence behaviors of these networks in order to investigate this matter.\\n\\n----------------\", \"post_rebuttal_thoughts\": \"I would like to thank the authors for their detailed response and the revisions made to the paper. I'm updating my score to 5 as part of my concerns are satisfactorily addressed, and I wished I could have more opportunities to discuss with the authors on their response. In general, my opinion is that the authors have introduced too many \\\"artificial\\\" components to the study (e.g., soft gradient coupling, the convergence/divergence indices) that make me slightly dubious of how generalizable this characterization is. For example, as the authors indicated, spectral normalization creates a different phenomenon (at a cost of worse performance), but with no change to the structure itself (so unlike the soft gradient coupling), a different phenomenon could be challenging the conclusion of the paper.\\n\\nMy suggestion would be that the authors delve deeper into the observations here and better integrate the revisions with their original approach (e.g., the high-dimensional discussion; the spectral normalization discussion, etc.)\\n\\n----------------\\n \\n- I feel that in the rebuttal phase the authors made certain important new edits to the paper (e.g., \\n\\nMy general opinion is that this paper investigates an interesting direction on the learning behavior of ResNets, but is still not quite ready for publication in a venue like ICLR. There is an obvious gap in related work (see my detailed comment below) on implicit deep networks; moreover, the definition of the various indices (e.g., convergence index) is also rather confusing to me. The empirical results are not strong enough evidence, in my view, to make most of the claims conclusively. I also have some doubts on the motivation for the methodology that the authors are using.\", \"pros\": \"1. Interesting direction; as the author shows, the ResNet architecture itself is expressive enough for implementing iterative computations/algorithms. So it is worthwhile to study its behavior along this trail.\\n2. The paper is overall written in a clear manner and the author explained their methodology well.\", \"cons\": \"1. Many arguments are too hand-wavy and I don't particularly find the metrics the authors define to analyze convergence/divergence particularly convincing. (See my comment below)\\n2. The experimental setup is mostly on small scales.\\n3. Even with the small scale setups, the experimental results don't seem conclusive enough (at least to me) to draw the conclusion that the authors were trying to claim. The verification of hypothesis 2 is especially hasty.\\n4. The motivation behind the soft gradient coupling is not clear to me.\\n5. There is a clear missing gap in the related work that I think the authors should pay attention to.\\n\\n--------------------------------------\\n\\nI will expand on some of the Cons above, and provide the following detailed comments/questions:\\n\\n1. 
Again, I think it is interesting to investigate the relationship between ResNets and iterative computations. But besides the canonical, plain unrolling of the layers that the authors have looked at, *implicit models* (i.e., models that study the continuous dynamics of a layer $f$) like Neural ODEs [1] and Deep Equilibrium Models [2] (there's a ResNet version of it) are both also looking at compact recurrent networks. In particular, the deep equilibrium models especially target the convergence (i.e., the \\\"fixed point\\\" of the layer), and seem to demonstrate state-of-the-art-level performance. In contrast to what the authors provided in the last paragraph of Section 1, I would therefore argue (based on Neural ODEs and deep equilibrium nets) that recurrence does offer some notable advantages like constant memory cost and analytical gradients. The other related thread of work is simply the classical recurrent backprop (RBP) theories, which study the convergence of recurrent networks and how one can leverage this property for the backward pass of these networks. I found the current version of the paper did not discuss either aspect of this, which I believe is important literature that actually is on the opposite side (partially) of what the authors are trying to claim. \\n\\n2. There are actually many ways that I can think of to make recurrent residual blocks converge when you infinitely repeat them. For example, with spectral normalization [3], we can simply make the Jacobian of the block have an operator norm $<1$. Then the Banach fixed point theorem will guarantee convergence. Other methods are also possible (e.g., via a provably convergent optimization perspective). These are not discussed in the paper (nor are they the main focus, I guess), but this doesn't mean that ResNets do not converge in general. The authors argue that \\\"some balance between feedforward and iterative computations might have been learned by the ResNets\\\", but there is actually a lot of noise in the analysis... for example, the networks could be overfitting, etc. The point is, as long as you regularize the model in that direction, the ResNets could still converge.\\n\\n3. One main problem that I found about this paper is its definition of the convergence/divergence indices. The \\\"convergence\\\" concept in this paper is constrained to look at the accuracy convergence, by which the authors look at the inverse of the AUC of the classification rate curve. But given the nature of softmax and the classification task itself, I don't think a convergence in accuracy is a good \\\"index\\\" for measuring convergence of an architecture, which Section 3.1 looks at (for $\\\\hat{z}_i^{(t)}$). For example, softmax is invariant to a constant shift. And for classification of, let's say 10 objects $(x_1, \\\\dots, x_{10})$, getting $x_1, \\\\dots, x_5$ correct is still different from getting $x_6, \\\\dots, x_{10}$ correct, even though they both have \\\"50% accuracy\\\". The paper investigates CIFAR-10, where one can achieve >94% accuracy, but in cases like ImageNet where 70% accuracy is normal, these two 50% are certainly non-convergent to me. Also, I'm assuming the entire Figure 1 is on the simple 2-dimensional linear task? Does the phenomenon in Figure 1d repeat in high dimensionality? If so, what does it look like? (My experience with this suggests that if you keep stacking the same block, the activations will eventually oscillate, if not converge, but it could differ by initialization.)\\n\\n4. 
Some arguments are also a bit handwavy to me and I'd appreciate it if the authors could expand on them. For example, in Section 3.2, the paper claims \\\"in contrast, the skip connections encourage a ResNet to use the same representational format across blocks... [and] are therefore better aligned with the final decoder\\\". As another example, the paper claims ResNets learn a balance between \\\"feedforward and iterative computations\\\". These are all intuitively reasonable arguments indeed, but considering that this is an empirical study paper, I think actually verifying these would make the paper stronger.\\n\\n5. About the soft gradient coupling, doesn't this simply mix the gradients and inject more stochasticity into them? In general, would you expect (when $0 < \\\\lambda < 1$) that just like in typical SGD, this stochasticity will be averaged out by the optimization procedure of deep networks? Since the $\\\\tilde{\\\\Delta}_t$ no longer fully reflect the mini-batch gradient descent direction, have you checked how the block parameters within the same stage gradually deviate from one another as you optimize the network (e.g., how does the standard deviation of $\\\\mathbf{W}_l^{(s)}$ over all layer $l$'s in the same $s$ change over training iterations?) Do they deviate or stay close? If these weights are eventually still different, why can one still consider them to be \\\"similar\\\" (other than the RI metric, which I find to be a debatable metric given #3 above...)? \\n\\n6. For the EPC, have you computed the EPC of an ordinary ResNet and a purely recurrent ResNet? How do their EPCs look when compared to the soft-gradient-coupled ResNets (e.g., $\\\\lambda=0.5$)?\\n\\n7. In Section 5.3, the paper claims that \\\"if this is the case, we would expect soft gradient coupling to find such a solution.\\\" Why? And isn't a soft gradient coupled ResNet still a non-recurrent ResNet (in the sense that you can't simply unroll a single layer to get the output; you still need to store all parameters of the network, rather than only a single layer of it)?\\n\\n--------------------------------------------\\n\\nI look forward to the authors' response on my questions/comments above. I'm happy to consider adjusting my score accordingly.\\n\\n\\n[1] https://arxiv.org/abs/1806.07366\\n[2] https://arxiv.org/abs/1909.01377\\n[3] https://arxiv.org/abs/1802.05957\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
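Point 5's weight-deviation question can be answered with a simple diagnostic; here is a sketch, assuming identical block architectures within a stage (illustrative, not from the paper):

```python
import torch

def stage_weight_spread(blocks):
    """Mean standard deviation, across the blocks of one stage, of each
    corresponding parameter -- logged over training, this shows whether
    soft-coupled weights deviate from one another or stay close."""
    spreads = []
    for per_block in zip(*[list(b.parameters()) for b in blocks]):
        stacked = torch.stack([p.detach() for p in per_block])
        spreads.append(stacked.std(dim=0).mean().item())
    return spreads
```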
"{\"title\": \"Review for \\\"Evidence against implicitly recurrent computations in residual neural networks\\\"\", \"review\": \"This paper presents an empirical study that characterizes and quantifies the implicitly recurrent nature of residual networks (ResNets). A ResNet can be construed as a general formulation of a recurrent neural network (RNN) unfolded for a fixed number of time steps. In particular, the authors propose \\\"soft gradient coupling,\\\" a novel way to control the degree of weight-sharing between the different residual blocks. This gives them the ability to smoothly interpolate between a \\\"no weight sharing\\\" scenario to a \\\"full weight sharing scenario.\\\" Soft gradient coupling imparts the ability to share similar \\\"computations\\\" without necessarily sharing the same weights. They introduce metrics such as convergence, recurrence, divergence indices, and an effective parameter count to quantify \\\"iterative\\\" behavior numerically. Finally, they also test the impact of \\\"iterative\\\" computations in a ResNet trained for non-trivial visual recognition tasks.\", \"pros\": \"The problem that the authors tackle is undoubtedly interesting and useful. This is particularly true in light of a growing literature analyzing ResNets as discretized dynamical systems. The demonstration that ResNets can express iterative algorithms but do not learn such algorithms by default is intuitive and powerful. The authors use a simple toy example (a linear function) to articulate the desiderata and then work with larger datasets.\\n\\nThough the manipulations to test iterative computations and implicit recurrence (early read out to determine convergence; residual block drop outs to determine recurrence; and repeated application of residual blocks to determine divergence) are not entirely novel, they are applied quite aptly. The most salient observation is that of divergence and is reminiscent of stability analysis of RNNs for vision. The notion that ResNets learn to tradeoff (and balance) feedforward vs. iterative computations is an interesting proposal. \\n\\nThe proposed \\\"soft gradient coupling\\\" scheme allows for different residual blocks to implement similar \\\"computations\\\" without necessarily sharing weights or changing the primary optimization problem. This is an interesting suggestion. \\n\\nThis paper also presents fairly extensive numerical experiments.\", \"cons\": \"The strong conclusion that ResNets do not benefit from recurrence regularization is premature, given the current set of experiments presented in this manuscript. As the authors themselves point out, \\\"iterative computation\\\" is an inductive bias. However, there is little reason to believe that this inductive bias is the right one for a classification problem. Have the authors tried to consider problems other than image classification? For instance, there has been recent literature on the relevance of iterative computation (visual routines) for contour detection and segmentation problems. Moreover, \\\"accuracy\\\" is not the only way to quantify the benefit. Have the authors tried to measure sample efficiency? i.e., can a ResNet employing iterative computations learn from fewer training samples than a non-iterative ResNet?\\n\\nHow does the performance benchmarking of a \\\"fully recurrent\\\" ResNet compare to a comparable-sized LSTM/GRU trained on this task? Or even a weight-shared ResNet? 
These comparisons seem to be necessary to discern if the soft gradient coupling is introducing other artificial biases.\\n\\nIt is unclear why the authors believed that a high degree of soft-gradient coupling would help with the divergence issue in the first place. Implementing the same computation repeatedly only converges (and stays there) given certain other properties of the transformation function applied (for instance, the spectral radius of each residual block's Jacobian). There is quite a bit of theoretical/empirical work in the literature in this regard.\\n\\nThe manuscript would benefit from some discussion on theoretical results from the RNN literature that outline necessary and sufficient conditions for the forward pass of RNNs to behave like convergent dynamical systems. Given this paper's focus on iterations, convergence, and divergence, this body of work seems relevant.\", \"minor\": \"(Fig. 3a) The recurrence index was normalized, yet there are points with a recurrence index greater than 1. Is there any explanation for this?\\n\\nThe recurrence index measure also does not seem to add much value. (Fig. 2a,b; second-to-left panel)\\n\\nAre the values reported in Table 1, for example, point estimates? Did the authors estimate some confidence intervals on these values by running a few repeats of the experiments?\", \"clarity\": \"(Pg. 2) \\\"Encouraging iterative behavior in this way therefore does not improve the inductive bias\\\": Is not iterative behavior *the* inductive bias?\\n\\n(Fig. 3a) There must be a discussion on the non-monotonicity of these curves (especially convergence/divergence).\\n\\nThe paper can do with a thorough reformatting of the reference list to make all entries consistent in citation style (e.g., URLs, DOIs, and proper, consistent journal/conference abbreviations).\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
JzG0n48hRf | Uncertainty for deep image classifiers on out of distribution data. | [
"Tiago Salvador",
"Alexander Iannantuono",
"Adam M Oberman"
] | In addition to achieving high accuracy, in many applications, it is important to estimate the probability that a model prediction is correct. Predictive uncertainty is particularly important on out of distribution (OOD) data where accuracy degrades. However, models are typically overconfident, and model calibration on OOD data remains a challenge. In this paper we propose a simple post hoc calibration method that significantly improves on benchmark results [Ovadia et al 2019] on a wide range of corrupted data. Our method uses outlier exposure to properly calibrate the model probabilities. | [
"uncertainty",
"confidence",
"out of distribution",
"outlier exposure",
"classification"
] | Reject | https://openreview.net/pdf?id=JzG0n48hRf | https://openreview.net/forum?id=JzG0n48hRf | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"3cAD2ZgivY",
"7eC_4j7tYVE",
"tFVyHMc0Lt",
"VAtQRROc7Sr",
"y-1q0sduTp",
"_2aO9lgIgv4",
"1AQnDQ8H7pF",
"QastWpMlJ7P",
"wQOb6VeBda",
"fkn1w5z6TnF",
"Kppxu78l1q",
"P_phbTML3Bk"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040415047,
1606258405581,
1605579903360,
1605579844536,
1605579815069,
1605579446663,
1605579283214,
1605579055378,
1603942835060,
1603896738154,
1603832917994,
1602831211998
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3596/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3596/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3596/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3596/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper presents a method to improve the calibration of neural networks on out-of-distribution (OOD) data.\\n\\nThe authors show that their method can be applied post-hoc to existing methods and that it improves calibration under distribution shift using the benchmark in Ovadia et al. 2019.\\n\\nHowever, reviewers felt that the theoretical justification for why this works is unclear (see detailed comments by R1 and R4), and some of the choices are not well-justified. Revising the paper to address these concerns with additional theoretical and/or empirical justifications should improve the clarity and strengthen the paper.\\n\\nI encourage the authors to revise and resubmit to a different venue.\"}",
"{\"title\": \"Final Revision\", \"comment\": \"Dear Reviewers and Area Chair,\\n\\nWe have made a final revision on our paper. For your convenience we highlighted all the new changes in red, with the changes in the first revision in blue.\\n\\nIn particular, we would like to draw your attention to the new Figures 3 and 11 and the extra discussion in the Appendix. We hope that these not only help explain how the methods work, but also why they work. We also updated Figure 9, as we notice we did not compile the correct figure in our latex code. While the changes are almost negligible and the conclusions remain unchanged, we felt we should still mention it.\\n\\nWe hope these additional explanations, as well as the previous ones, help you reconsider your ratings, like it did for Reviewer 3 after our first revision. Using a purely statiscal approach, we have proposed a simple and intuitive method (**R3**) that is fast to compute (**R1**), requiring no training of the Neural Net classifier, and that consistently improves calibration across different corruptions (**R1**,**R3**,**R4**).\"}",
"{\"title\": \"Revision Submitted and general remarks\", \"comment\": \"Dear reviewers,\\n\\nThank you for your time and remarks. For your convenience we highlighted all the changes in the pdf in blue. We have addressed your comments and concerns by replying individually to each one of you and we hope the additional explanations and experiments have addressed your previous questions.\", \"we_would_like_to_emphasize_the_following\": [\"Our proposed methods are indeed remarkly simple and rely on a purely statistical approach, requiring no additional training. The cost of the methods relies solely on binning the data into histogram thus making it very fast to compute. We shared the code in the suplemental material.\", \"We do not rely on extra information to solve the problem. Only one corruption is used to form the calibration sets (contrast is used in the paper and in the revised version of the paper we include in the Appendix a discussion regarding use of other corruptions). At test time, we evaluate the calibration of the models on corruptions (brightness, glass blur, speckle noise, etc) never seen by the model. This is an important point that we believe may be the source of some misunderstanding regarding our method. In the revised version, we made sure to emphasize this point throughout the paper.\", \"Please let us know if any further clarifications would help you reconsider your rating.\"]}",
"{\"title\": \"Minor points addressed\", \"comment\": \"> - it should probably be $L_i(p_{max})/...$ and not $L_j(p_{max})/...$ in the equation in paragraph \\\"Single Image Method\\\" (i).\\n\\nThat is correct.\\n\\n> - It might be better to rename $P_m^{deploy}$ as $P_m^{test}$, to confirm to the standard terminology of train/validation/test data.\\n\\nWe agree and made the change.\\n\\n> - in \\\"of p_max under each of the calibration models, using the probability density\\\", I think it is easier to understand if \\\"calibration models\\\" -> \\\"calibration sets\\\".\\n\\nChanged.\"}",
"{\"title\": \"Unclear/weak points addressed\", \"comment\": \"> - The proposed method relies on set of pre-specified set of corruptions that are used to generate the validation data. Therefore, I think it is important to throughly evaluate the impact when the corruption used for validation is different from the one in the test data. However, results are only reported when the validation corruption is \\\"Contrast\\\". Furthermore, it seems the results reported are the ones on the test data averaged over several types of corruption. However, it would be interesting to see for what types of test corruption the method work/not works.\\n\\nWe believe there may be a misunderstanding here. The type of corruption in the calibration/validation set is always different from the corruptions in the test set. For the results presented, only the 'Contrast' corruption is used in the validation set. For the test set, we use all the remaining corruptions (noise, glass blur, brightness, etc). \\n\\nTherefore, in Figure 1, we report the average over several types of corruptions (noise, blur, brightness, etc ) with 'constrast' corruption being used in the validation set . In Figure 3, we expand on this using box-whisker plots which allows us visualize the variation across the different types of corruptions, again with 'constrast' being used only in the validation set. We do agree that it is important to see for which types of corruption the method works or not so we have added Tables 5 and 6 in the Appendix for that matter.\\n\\nMoreover, we added a discussion in the Appendix of how the choice of the corruption used in the calibration set affects the results. The idea is to choose a corruption that leads to calibration sets whose accuracy slowly decreases in the presence of corrupted images at higher levels, while remaining calibrated for clean images.\\n\\n> - It would be interesting to see results when the test data is completely OOD data, like using SVHN dataset for testing predictions of a model that was trained on CIFAR-10 (see Ovadia et al, 2019).\\n\\nThank you for the suggestion. We added this in Figure 7. We also added results for MNIST trained models evaluted on Fashion-MNIST and Not-MNIST in Figure 8.\\n\\n> - Some intuition/analysis should be given about the proposed method. For example, in \\\"Single Image Method\\\" (i) and (ii): My understanding is that the authors want to achieve that if a test sample is corrupted by type A, then q_i(max) should be close to one for the calibration set which type of corruption is similar to type A.\\n\\nThat is precisely it. When introducing the methods in section 3.1, we added additional explanations in the revised version of the paper.\\n\\n> - At least in the Appendix: Some more formal description of how Equation (3) is calculated should be given\\n\\nWe added an explanation as to how Equation (3) can be computed in practice.\\n\\n> - The formula for computing $L_j(p_{max}(x))$ in \\\"Single Image Method\\\" (i) should be given.\\n\\nThe previous presentation allowed for a more general definition of the method where $L_j(p_{max}(x))$ can be we defined as a function of $h^{CAL,j}(x)$. Since we simply take $L_j(p_{max}(x))=h^{CAL,j}(x)$, we simplified the presentation accordingly.\\n\\n> - The notation is slightly unclear, what does the \\\"$m$\\\" in $P_m^{CAL,j}$ mean?\\n\\nThere should be no subscript $m$, which we believe was source of the confusion. 
We have also simplified the notation (see the answer above); hopefully it is clearer now.\\n\\n> - I am not sure what this means: \\\"on each of the calibration sets determined by combinations of intensity given by: {0}, {5,0}, {5,4,0}, {5,4,3,0}, {5,4,3,2,0}, and {5,4,3,2,1,0}.\\\"\\n\\nIt means we have a total of 6 calibration sets. The set {0} refers to the calibration set with clean images only, {5,0} with clean images and corrupted images with intensity 5, and so on. Given the feedback provided by the referees, in the new version we adopt a much simpler and more intuitive choice: {0}, {0,1}, {0,2}, {0,3}, {0,4}, {0,5}. \\n\\n> - What are the sizes of each calibration set in the experiments? How many calibration sets are there?\\n\\nThere are a total of 6 calibration sets. The one containing only clean images, {0}, has 5000 images, while the others {0,1}, {0,2}, {0,3}, {0,4}, {0,5} have 10000 images each: 5000 clean images (the same as in {0}) and their corrupted counterparts. This is true for both CIFAR-10 and ImageNet. This information can be found in section 3.2 of the paper.\\n\\n> - I am also not sure about this sentence: \\\"Heuristically we always want clean images in our calibration set while having different shifted means.\\\" Does it mean that you would like to have both clean images and corrupted images in the validation dataset?\\n\\nYes, precisely! Without any clean images, we would obtain poorly calibrated methods for in-distribution images. By having both, the method remains well-calibrated for in-distribution images, and its calibration for out-of-distribution images is improved.\"}",
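For concreteness, here is a rough sketch of the Single Image Method as described above, assuming the weights are the normalized histogram densities $L_j(p_{max})/\sum_k L_k(p_{max})$ and that per-bin accuracies on each calibration set are precomputed (array names are illustrative, not the exact implementation):

```python
import numpy as np

def single_image_calibrate(pmax, bin_edges, density, bin_acc):
    """Weight each calibration set by the histogram density of this
    image's pmax under that set, then average the sets' per-bin
    empirical accuracies to get a calibrated confidence.

    density : (num_sets, num_bins) histogram densities of pmax values
    bin_acc : (num_sets, num_bins) empirical accuracy per pmax bin
    Both are assumed precomputed on the calibration sets.
    """
    b = int(np.clip(np.searchsorted(bin_edges, pmax) - 1,
                    0, density.shape[1] - 1))
    w = density[:, b]
    w = w / w.sum() if w.sum() > 0 else np.full(len(w), 1.0 / len(w))
    return float(w @ bin_acc[:, b])
```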
"{\"title\": \"Open questions addressed\", \"comment\": \"> - The terminology of the paper is ill-defined. For instance, I don't think the authors ever explicitly define pmax. I understood it by context, but the presentation could be much clearer.\\n\\nWe now define pmax when is first mentioned in the introduction and recall its definition when introducing ours methods. We also renamed $P_m^{deploy}$ as $P_m^{test}$, as suggested by Reviewer 4 , to conform to the standard terminology of train/validation/test data. Moreover, we also rewrote the introduction.\\n\\n> - When is it clear to use the \\\"Multiple Image Method\\\"? How can one be sure that x1,...xm come from the same distribution?\\n\\nWe envision that the Multiple Image Method to be used when it is a priori known that $x_1,\\\\ldots,x_m$ come from the same distribution. This in fact is the assumption in the unsupervised domain adaptation where the goal is to transfer a classifier trained on a source distribution $p$ to a target distribution $q$, which is assumed to be known (see also our response to Reviewer 2). Without that assumption a statiscal test could be performed, although that falls outside of our expertise. A simple naive approach could be to randomly separate the images into two sets, construct the empirical distribution function for each and compute the Kolmogorov-Smirnov statistic.\\n\\n> - It was not explained why \\\"contrast\\\" was chosen as the calibration set.\\n\\nWe added a discussion in the Appendix of how the choice of the corruption used in the calibration set affects the results. The idea is to choose a corruption that leads to calibration sets whose accuracy slowly decreases in the presence of corrupted images at higher levels, while remaining calibrated for clean images. We found the results were of similar quality for most choices of calibration set, with some exceptions e.g Glass Blur corruption which has a comparatively strong corruption, decreasing accuracy significantly even at low intensities. \\n\\n> - How can this method be applied when the OOD corruption is very far away from \\\"contrast\\\"? Could you evaluate how your method performs on corruptions such as translations and rotations?\\n\\nWe are limited here to the data available in the benchmark (Ovadia et al (2020)), but we were able to add translation a possible corruption for CIFAR10 models, for which both the single and multi image method performs well (see Table 6 in the Appendix which shows how the method performed across the different corruptions, in particular translations). We also evaluate the confidence of CIFAR10 trained models on the entirely OOD dataset SVHN (see Figure 7). Finally, we also present results on MNIST trained evaluated on entirely OOD datasets: Fashion-MNIST and Not-MNIST (see Figure 8).\"}",
"{\"title\": \"Additional experiments added\", \"comment\": \"> (-) While the 'single image method\\u2019 is more or less intuitive, the motivation for the \\u2018multiple image method' is less clear. For example, why not compare the distribution of p_maxes in S^deploy to the distribution of p_maxes in each $p^{CAL}$? Or, why not do a similar weighted averaging as in the single image method, with divergences between these two distributions providing the weights?\\n\\nIn fact, we do compare the distribution of the pmax values of $S^{deploy}$ (in the revised version we use the notation $S^{test}$ instead per suggestion of Referee 4) but we do so based solely on the mean. We tried using the KL-divergence but obtained comparable results and therefore kept the mean for simplicity. Moreover, one can still interpret the method as a weighted average but with $q_i \\\\equiv 1$ and $q_j \\\\equiv 0$ with $j\\\\neq i$, where $i$ denotes the calibration set with pmax mean closest to the pmax mean of the test set.\\n\\n> (-) While this paper mostly builds on prior work, it would be interesting to see calibration advantages on more than just the artificially corrupted sets. The prior work [2] that this submission builds upon reports results across a fairly wide range of tasks and out-of-distribution types. It would be more interesting to see if such calibrations with a specific set can be relevant for more realistic OOD cases as well, as in [2].\\n\\nWhile the methods proposed here can be for instance extended to text categorization task described in [2], our focus in this paper is image classification. In order to test our method on more realistic OOD cases, we added results of CIFAR10 trained models evaluated on SVHN and MNIST trained models evalutated on FASHION-MNIST and NOT-MNIST. See Figures 7 and 8. Compared to [2], the improvements are significant.\\n\\n> The sentence \\\"Heuristically we always want clean images in our calibration set while having different shifted means\\u201d is not clear, could the authors elaborate? Why might we prefer such a calibration set? This might be an important point for the reader, since it motivates the particular choice of contrast-corruption.\\n\\nPrevious methods were well-calibrated on uncorrupted images, but poorly calibrated (overconfident) on corrupted images. Our contribution is to improve calibration on both uncorrupted and corrupted images. In order to do so, we need have various levels of corruption in our calibration sets. By always having clean images in our calibration set we keep the ratio of clean and corrupted images the same. Otherwise, our method would be under-confident on uncorrupted images. \\n\\n> An obvious baseline would be to perform the same recalibration procedure, but with the validation set (i.e. non-corrupt data), to figure out if corrupt sets in particular are required for calibration. I suspect they are, since performance at such sets are likely to be poorer, which would allow for more calibration room over a larger \\u201cerror\\u201d-space.\\n\\nWe added this in the Appendix. As expected, the inclusion of the corrupted images in the validation set is needed, in particular for higher levels of corruption intensity.\"}",
"{\"title\": \"Clarification on problem solved\", \"comment\": \"First, we would like to thank you for taking the time to read our paper and for the suggested references, which we have added to the related work section, together with a few more.\\n\\n> The approach proposed by the authors is fundamentally flawed: while they do not directly assume to know which \\u201cunknown distribution\\u201d the novel image is from, they assume it is from one of a small set of possibilities. This information is not assumed in existing work, and fundamentally alters the problem, making it simple to address and uninteresting.\\n\\nUpon reading the suggested references, we believe you have in mind the problem of calibration in the *unsupervised domain adaptation* setting, whereas what we are doing is *calibration under distribution shift*. Here's our understanding of each. \\nIn *unsupervised domain adaptation*, the goal is to calibrate a model, trained on the source distribution $\\\\rho_{train}$, to the target distribution $\\\\rho_{test}$, given labeled examples from the source distribution and unlabeled examples from the target distribution. An example would be to train a model on MNIST and evaluate it on SVHN. In *calibration under distribution shift*, we are looking at corrupted images from the same data set, with unknown corruptions (see Tables 5 and 6 in the revised version for all the corruptions considered). The confusion here may arise because our multi image method can also be applied to unsupervised domain adaptation (although this was not our focus). We think this is a natural mistake to make and we have therefore rewritten the introduction to make it clear which problem we are addressing. We kindly ask that you please take a second look with this distinction in mind.\\nMoreover, we should also point out that our single image method tackles the much harder problem of calibrating on a single image (without even making use of unlabeled examples of the target distribution).\\nIn the regards to the use of extra information, we don't believe that to be the case. As AnonReviewer3 explains in the first paragraph of his review our paper \\\"uses contrast-corrupted data to calibrate predictive confidences and shows improvements for the types of corruptions discussed in [1] (leaving out contrast)\\\". This is a crutial point. While the contrast corruption is used to build the surrogate calibration sets, the methods are evaluated at test time on never seen corruptions, hence why we refer to them as out-of-distribution.\\nTo further address your concerns, in the revised version of the paper we evaluate our method on completely OOD data: CIFAR-10 trained models evaluated on SVHN and MNIST trained models on Fashion-MNIST and Not-MNIST (see Figures 7 and 8). Our methods are very effective in detecting OOD data.\\nFinally, we point out that, based on the references you provided, a more compelling experiment would be to measure the performance of our methods using MNIST trained models evaluated on SVHN. However, the benchmark provided by Ovadia et al (2020) requires additional work to run such an experiment, (we just used the model outputs provided, and these are not available on SVHN for MNIST trained models). Unfortunately we do not think we will have the time to run such experiments that fall under the unsupervised domain adaptation setting during this short rebuttal period. 
We may be able to present some restricted experiments if you still deem it necessary and worthwhile given the different problem we propose to solve.\\n\\n> The proposed approach is also very simplistic, which is not a flaw in and of itself but is a consequence of the extra knowledge they assume. In particular, given this extra information, the authors simply predict which shifted distribution the novel example is from, and then use the calibrated prediction for that distribution.\\n\\nAgain, we cannot help to think that there may be a misinterpretation of what we are doing here. We emphasize once again that we calibrate on contrasted images, and at test time, the OOD images presented are shifted by other different transformations (e.g. gaussian blur, various forms of noise, etc.). These have not been seen by the model prior.\"}",
"{\"title\": \"Clearly flawed approach relies on extra information\", \"review\": \"This paper studies the problem of providing calibrated predictions for out-of-distribution data. They propose algorithms for both calibrating predictions given a single image from the unknown distribution as well as given multiple images from the unknown distribution. They propose an algorithm that estimates which \\u201ccalibration distribution\\u201d the novel image came from, and then use calibrated predictions for this distribution. They evaluate their approach on a standard image datasets including CIFAR-10 and ImageNet, and show that their approach outperforms existing work.\\n\\nPros\\n- Important problem\\n\\nCons\\n- The approach claims to work on out-of-distribution data, but assumes the possible novel distributions are known\\n- Missing related work\", \"the_approach_proposed_by_the_authors_is_fundamentally_flawed\": \"while they do not directly assume to know which \\u201cunknown distribution\\u201d the novel image is from, they assume it is from one of a small set of possibilities. This information is not assumed in existing work, and fundamentally alters the problem, making it simple to address and uninteresting.\\n\\nThe proposed approach is also very simplistic, which is not a flaw in and of itself but is a consequence of the extra knowledge they assume. In particular, given this extra information, the authors simply predict which shifted distribution the novel example is from, and then use the calibrated prediction for that distribution.\\n\\nIn practice, this problem is important for handling unanticipated distribution shifts in production. If the distribution shift is known and anticipated, then a much more natural approach would be to simply use data augmentation to generate data from the shifted distribution and train the model on this extra data.\\n\\nIn addition, there recent work in this area that the authors do not cite, for instance:\\n\\nPark et al., Calibrated Prediction with Covariate Shift via Unsupervised Domain Adaptation. In AISTATS 2020.\\n\\nWang et al., Transferable Calibration with Lower Bias and Variance in Domain Adaptation. In NeurIPS 2020.\\n\\n-------------------------------------------------------------------------------------------------------------------------------\", \"post_rebuttal\": \"I have updated my score based on the clarification provided by the authors. My remaining concern is that I still think the baselines considered by the authors is incomplete. In particular, the calibration under distribution shift techniques can still be applied, just using either just a single test image or their set of multiple test images. Admittedly, this approach would probably not perform well for a single image, but in Table 5, it seems like oftentimes multiple images are needed to even beat Ovadia et al. (2019).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Simple method, provides improvements to calibration\", \"review\": \"The submission proposes a very simple and seemingly effective method for improving uncertainty estimates of predictions for corrupted data. The main idea is to calibrate predictive confidences assuming access to an \\u201cexposure\\u201d set of corruptions. In particular, the paper uses contrast-corrupted data to calibrate predictive confidences and shows improvements for the types of corruptions discussed in [1] (leaving out contrast).\\n\\n(+) The improvements seem to be fairly consistent, and the method is simple and intuitive. It is interesting to know that such improvements can be had, i.e. calibration on one type of corruption is transferrable to other types to some extent.\\n\\n(-) While the 'single image method\\u2019 is more or less intuitive, the motivation for the \\u2018multiple image method' is less clear. For example, why not compare the distribution of p_maxes in S^deploy to the distribution of p_maxes in each p^CAL? Or, why not do a similar weighted averaging as in the single image method, with divergences between these two distributions providing the weights?\\n\\n(-) While this paper mostly builds on prior work, it would be interesting to see calibration advantages on more than just the artificially corrupted sets. The prior work [2] that this submission builds upon reports results across a fairly wide range of tasks and out-of-distribution types. It would be more interesting to see if such calibrations with a specific set can be relevant for more realistic OOD cases as well, as in [2].\\n\\nThe sentence \\\"Heuristically we always want clean images in our calibration set while having different shifted means\\u201d is not clear, could the authors elaborate? Why might we prefer such a calibration set? This might be an important point for the reader, since it motivates the particular choice of contrast-corruption.\\n\\nAn obvious baseline would be to perform the same recalibration procedure, but with the validation set (i.e. non-corrupt data), to figure out if corrupt sets in particular are required for calibration. I suspect they are, since performance at such sets are likely to be poorer, which would allow for more calibration room over a larger \\u201cerror\\u201d-space. \\n\\nOverall I think this paper could be interesting in that it informs us of the possibility of transferring recalibration from a corruption-exposure procedure, which to my knowledge is a novel reporting. More experiments as described above would improve the paper, by making a more compelling case for such exposure-based recalibration techniques.\\n\\n[1] Benchmarking neural network robustness to common corruptions and perturbations, Hendrycks and Dietterich\\n\\n[2] Can you trust your model\\u2019s uncertainty? Ovadia et al.\", \"post_rebuttal\": \"Thanks for the response, and the new experiments. I continue to think that this is a nice simple method that works well enough to be interesting. I retain my initial rating.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Post-hoc calibration looks promising, but many open questions remain about applications and generalization\", \"review\": \"In this work, the authors propose a post-hoc calibration method for potentially OOD data that relies on estimation of the \\\"degree\\\" of corruption for new test data. The rely on the benchmark provided in Ovadia et al. as a basis for their analysis. Ovadia et al. assessed common measurements of uncertainty such as Brier score, ECE, and entropy over a variety of datasets including MNIST + translations/rotations, CIFAR-10 and CIFAR-10C, and ImageNet and ImageNet-C for a variety of models such as vanilla neural networks, SVI, ensembles, and Dropout. By using corrupted versions of these common datasets, Ovadia et al. could evaluate how uncertainty estimates vary under dataset shift. In this work, the authors aim to improve the calibration of probabilities obtained from these models. They start by establishing a calibration set where they derive $p_{correct}$ from a sample of $p_{max}$. Then, depending on how many test images they are evaluating, they use a single image or multiple image method to attempt to determine which calibration set (of which there can be many depending on the number of corruption levels considered), the test images are closest to. Then they \\\"correct\\\" the model's probability estimate by weighting over the calibration sets. They show on CIFAR-10 and ImageNet that their method results in lower ECE over varying levels of corruptions.\", \"strengths\": [\"This method could be applied post-hoc to a variety of models\", \"Seems fast to compute\", \"Results in better calibration estimates\"], \"weaknesses\": [\"The terminology of the paper is ill-defined. For instance, I don't think the authors ever explicitly define $p_{max}$. I understood it by context, but the presentation could be much clearer.\", \"When is it clear to use the \\\"Multiple Image Method\\\"? How can one be sure that $\\\\{x_1,...x_m\\\\}$ come from the same distribution?\", \"It was not explained why \\\"contrast\\\" was chosen as the calibration set.\", \"How can this method be applied when the OOD corruption is very far away from \\\"contrast\\\"? Could you evaluate how your method performs on corruptions such as translations and rotations?\", \"Ultimately, I don't think that this method was presented in a clear enough fashion or has sufficiently demonstrated an ability to generalize to new types of corruptions and am rating this a 4 because of these reasons.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"New method for preventing OOD detection in image classification, but unclear when and why method works\", \"review\": \"Thank you for the additional experiments. Especially, Figure 7 and 8 look promising.\\nMy conclusion from the experiments is that the \\\"contrast\\\" corruption (used for validation) seems to be general enough, in the sense that for many other corruptions, encountered at test time, the performance is good.\\nHowever, as AnonReviewer1, I am not sure about why the methodology seems to work well for very different types corruptions at test time, and completely OOD data (Figure 7 and 8). \\nMore empirical/theoretical analysis would be nice.\\nIncreased rating to 6.\\n\\n------\", \"summary\": \"The paper addresses the important problem of over-confident predictions on out-of distribution (OOD) data. \\nFocusing on image classification, they propose to calibrate probabilities on a validation dataset which contains images that were artificially corrupted in various ways (noise, blur, brightness etc.). This is different from traditional post-hoc methods which use an uncorrupted validation data set (iid assumption). This way they can explicitly calibrate probabilities for the non-iid assumption.\\nTheir proposed methods (Single Image and Multiple Image Method) seem to be new, and their experiments suggests that for certain types of corruptions their method is effective (as measured by ECE and shown by calibration diagrams in Figure 2).\", \"strong_points\": [\"Even when the type of corruption in the validation dataset is different from the corruption in the test set, the proposed method can considerably improve over existing methods.\", \"The proposed method, similar to other post-hoc calibration methods, can be used in combination with any model.\", \"The illustrations, in particular Figure 2, are well done.\", \"Unclear/Weak points:\", \"The proposed method relies on set of pre-specified set of corruptions that are used to generate the validation data. Therefore, I think it is important to throughly evaluate the impact when the corruption used for validation is different from the one in the test data.\", \"However, results are only reported when the validation corruption is \\\"Contrast\\\".\", \"Furthermore, it seems the results reported are the ones on the test data averaged over several types of corruption. 
However, it would be interesting to see for what types of test corruption the method work/not works.\", \"It would be interesting to see results when the test data is completely OOD data, like using SVHN dataset for testing predictions of a model that was trained on CIFAR-10 (see Ovadia et al, 2019).\", \"Some intuition/analysis should be given about the proposed method.\", \"For example, in \\\"Single Image Method\\\" (i) and (ii):\", \"My understanding is that the authors want to achieve that\", \"if a test sample is corrupted by type A, then q_i(max) should be close to one for the calibration set which type of corruption is similar to type A.\", \"At least in the Appendix:\", \"Some more formal description of how Equation (3) is calculated should be given\", \"The formula for computing $L_j(p_{max}(x))$ in \\\"Single Image Method\\\" (i) should be given.\", \"The notation is slightly unclear, what does the \\\"m\\\" in $P^{CAL, j}_m$ mean?\", \"I am not sure what this means:\", \"\\\"on each of the calibration sets determined by combinations of intensity given by: {0}, {5,0}, {5,4,0}, {5,4,3,0}, {5,4,3,2,0}, and {5,4,3,2,1,0}.\\\"\", \"What are sizes of each calibration set in the experiments? How many calibration sets are there?\", \"I am also not sure about this sentence:\", \"\\\"Heuristically we always want clean images in our calibration set while having different shifted means.\\\" Does it mean that you would like to have both clean images and corrupted images in the validation dataset?\"], \"minor_points\": [\"it should probably be $L_i(p_{max}) / ...$ and not $L_j(p_{max}) / ...$ in the equation in paragraph \\\"Single Image Method\\\" (i).\", \"It might be better to rename $P^{deploy}_m$ as $P^{test}_m$, to confirm to the standard terminology of train/validation/test data.\", \"in \\\"of p_max under each of the calibration models, using the probability density\\\",\", \"I think it is easier to understand if \\\"calibration models\\\" -> \\\"calibration sets\\\".\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
_IM-AfFhna9 | Generalized Variational Continual Learning | [
"Noel Loo",
"Siddharth Swaroop",
"Richard E Turner"
] | Continual learning deals with training models on new tasks and datasets in an online fashion. One strand of research has used probabilistic regularization for continual learning, with two of the main approaches in this vein being Online Elastic Weight Consolidation (Online EWC) and Variational Continual Learning (VCL). VCL employs variational inference, which in other settings has been improved empirically by applying likelihood-tempering. We show that applying this modification to VCL recovers Online EWC as a limiting case, allowing for interpolation between the two approaches. We term the general algorithm Generalized VCL (GVCL). In order to mitigate the observed overpruning effect of VI, we take inspiration from a common multi-task architecture, neural networks with task-specific FiLM layers, and find that this addition leads to significant performance gains, specifically for variational methods. In the small-data regime, GVCL strongly outperforms existing baselines. In larger datasets, GVCL with FiLM layers outperforms or is competitive with existing baselines in terms of accuracy, whilst also providing significantly better calibration. | [
"vcl",
"gvcl",
"variational continual learning",
"baselines",
"variational continual",
"continual learning deals",
"training models",
"new tasks",
"datasets",
"online fashion"
] | Accept (Poster) | https://openreview.net/pdf?id=_IM-AfFhna9 | https://openreview.net/forum?id=_IM-AfFhna9 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"1CJ3_lw7_p",
"99ISK-NZTtS",
"cPZsnEKtGll",
"JilFwoZpdq",
"OWtvoM6mn6",
"WeUPiP0IZ4u",
"guy52WMx3W3",
"FmPNw7vhNn",
"P1GY-NN9Cup",
"gwCeps9NgkF",
"2m9VQMwSyc",
"eqdDibo5AVM",
"RscEZOMhMdA",
"1CT7Zvg20nN",
"JAminNFC9sk",
"6mbxJeIjP5u",
"T2QHQTPRtFG",
"blmc-57Q2C",
"bHD9wQgOkEX",
"n2JbrFs8a8",
"fgM2BnHqLp",
"Tmnu6w2n6QB"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040405306,
1606226291800,
1606189339776,
1606136779393,
1606136380045,
1606106608459,
1605799689211,
1605711974467,
1605649751357,
1605626993554,
1605619437314,
1605614452748,
1605614160127,
1605614054511,
1605613992691,
1605613776977,
1605613549737,
1605613454329,
1604299316786,
1603972280707,
1603617206978,
1603523460306
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3595/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"Three of four reviewers are in favour of accepting the paper. Some reviewers raised valid criticism regarding the derivations, interpretation of the mathematical analysis and experimental results. So clearly some aspects of the paper could and should be clarified in accordance with the points raised by the reviewers. However, all in all the paper contains enough contributions to warrant publication.\"}",
"{\"title\": \"Changes to the abstract and introduction\", \"comment\": \"Thank you for your response. As you suggested, we have now modified the Abstract and the Introduction in the latest version of the paper. This is to make the importance of the synergy between GVCL and FiLM layers more obvious. For example, the last paragraph of the Introduction now explicitly says that experiments are included with GVCL+FiLM layers, and we added a sentence, \\u201cIn Section 5.4 we show that FiLM layers provide a disproportionate improvement to variational methods, confirming our hypothesis in Section 3.\\u201d\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for the response with clarifications. So the improvement comes from both GVCL and FiLM layers, rather than the GVCL framework alone. As a result, I think it is essential to highlight this in the main text, or even in the title. Otherwise, it gives one impression that GVCL is the solution.\"}",
"{\"title\": \"New revision with clarifications\", \"comment\": \"We have now updated the paper to include the following discussion of the predictive distribution at the end of section 2.2:\\n\\nWhen performing inference with GVCL at test time, we use samples from the unmodified $q(\\\\theta)$ distribution. This means that when $\\\\beta = 1$, we recover the VCL predictive, and as $\\\\beta \\\\to 0$, the posterior collapses as described earlier, meaning that the weight samples are effectively deterministic. This is in line with the inference procedure given by Online EWC and its variants.\\nIn practice, we use values of $\\\\beta = 0.05 - 0.2$ in Section 5, meaning that some uncertainty is retained, but not all. We can increase the uncertainty at inference time by using an additional tempering step, which we describe, along with further generalizations in Appendix D.\\n\\nIf you have any suggestions, we are happy to modify the paragraph, or add more details in one of the appendices. \\nThe other changes in the latest revision are given in the latest general response.\"}",
"{\"title\": \"23/11 General Response\", \"comment\": \"In the latest revision, we made some minor changes to some figures and section 2.2, which we outline here:\\n\\n1. Added a paragraph clarifying how predictions are made with GVCL at the end of section 2.2\\n2. Updated outdated figures in appendix K which describe the easy-CHASY benchmark\\n3. Fixed an error in the newly added plots from the last revision in appendix J where the joint training lines were plotted wrong\\n4. fixed a problem with the SGD, Separate, SGD-Frozen, and Joint (MAP) training on CHASY datasets where early stopping was performed on slightly the wrong criteria. This affects table 1 and figure 2ab (in the backwards and forwards transfer metrics by ~0.2% and the plot of the joint training line). The change is very minor and does not affect our analysis or conclusions\\n\\n24/11 Small revision:\\n1. Modified the abstract and introduction so that it is more clear that the improvements are from both GVCL and FiLM layers\"}",
"{\"title\": \"Thank you for your response\", \"comment\": \"Thank you for your efforts to address my comments and to improve the paper.\\n\\n1.\\nI find the link to cold posteriors interesting, it would explain the choices more consistently from an inferential point of view. This analysis can also be deepened somewhat in future work, as the importance of 'cold' posteriors seems under-explored in the context of continual learning.\\n\\n2.\\nThank you for clarifying that. I suspected FiLM layers would also work well with VCL itself in this case, I am glad this gets confirmed, as it may inform users running VCL systems that structure akin to FiLM layers may immediately help improve their systems. It is also quite interesting that GVCL outperforms VCL when using FiLM layers. I think this is overall a valuable addition to the paper and makes the case for this 'second' idea in the manuscript more succinctly.\\n\\n3.\\nI would be presumptuous to suggest a title and I exclude this from my evaluation of the paper of course, but my mental hash function for this manuscript stored it under 'unifying VCL and EWC' rather than 'generalizing VCL' as the suggested changes are not strictly advances in the field of approximate inference. I don't know if my hash function is useful enough to the authors for considering titles that would more directly inform the reader about the content, it is anecdotal.\\n\\nOverall, I remain convinced this is a strong paper and continue to suggest acceptance.\"}",
"{\"title\": \"review adjust scores\", \"comment\": \"Thanks again for further clarifications.\\nI will read the paper again (after the revision) and update my scores.\"}",
"{\"title\": \"GVCL Posterior\", \"comment\": \"Thanks again for your response.\\n\\n1. In the next revision, we will add a discussion of how predictions are performed in Section 2.2 noting that the VCL predictive is recovered when $\\\\beta=1$ and the Online EWC predictive is recovered when $\\\\beta \\\\to 0$ as the weight uncertainty disappears. In the paper, we typically use values of $\\\\beta$ between 0.2 and 0.05, so there is still some uncertainty retained.\\n\\n2. It is not clear to us whether Ritter et al. do use weight uncertainties when making predictions e.g. in section 2, paragraph 1 (https://papers.nips.cc/paper/2018/file/f31b20466ae89669f9741e047487eb37-Paper.pdf),\\nthe paper says their aim is to find a MAP estimate to the posterior over all datasets. This would imply that they do not predict with uncertainty for *all* of the algorithms, including e.g. the one called \\\"Online Laplace\\\", even though this might be confusing. Furthermore, they describe EWC as \\u201capproximat[ing] the posterior .. with a Gaussian\\u201d (section 3 paragraph 2), but EWC also does not use weight uncertainty either, casting doubt on whether they use uncertainty too. Moreover, the only (non-official) implementation we have found also only uses deterministic weights (https://github.com/hannakb/KFA).\\n\\nIndeed, generally, if you use Laplace's approximation for Bayesian neural networks with Monte Carlo sampling for forming the predictive, you get very poor results (see e.g. https://arxiv.org/abs/1906.11537). So you have to either temper the posterior by a large amount (e.g. by removing all uncertainty) or use the linearisation approximation discussed in the above paper.\"}",
"{\"title\": \"posterior approximation\", \"comment\": \"Thanks for the clarifications!\\n\\nLaplace propagation and Online Structured Laplace Approximations (Ritter et. al) do compute the Hessians though. \\nI understand now that you do not need to compute the Hessians for learning continually because of the cancellation. However for the posterior predictive distribution, computing the \\\"right\\\" covariance would be necessary. \\nI would have expected that the variational resulting distribution from optimizing the ELBO is a posterior approximation, which is the case for VCL and for Laplace's approximation, but for GVCL this is arguably not the case.\\nI think this should be discussed. \\n\\nI suppose, we could do a Laplace's approximation in GVCL as well, if we compute the Hessian at mu, ignoring the learnt variance (which does not correspond to either variational or Laplace approximation).\"}",
"{\"title\": \"Clarification of misunderstanding with Online-EWC\", \"comment\": \"Thank you for your quick response, and for your time.\\n\\nYou are correct when you state that the resulting posterior is not identical to that recovered using Laplace\\u2019s approximation: the means are the same, but the covariance is near-zero. Therefore the approximate posterior over the weights has no uncertainty. This is in fact exactly how Online EWC performs predictions (Schwarz et al., 2018, \\u201cProgress & Compress: A scalable framework for continual learning\\u201d, Section 4). That is, predictions at test-time are made with deterministic weights (no uncertainty), and the Hessian / the link to Laplace\\u2019s approximation is only used for updating this mean parameter value.\\n\\nIf Laplace\\u2019s approximation is used to form a non-deterministic posterior which is used to make predictions, then that is a slightly different algorithm to Online EWC, which we agree does not fall under the GVCL family.\"}",
"{\"title\": \"Thanks for clarification, one more potential misunderstanding to be clarified\", \"comment\": \"Thanks for the clarification. Fig. 8 now makes sense to me.\\n\\nRegarding 2), I am still trying to understand whether or not the variance resulting from optimizing the beta-ELBO is the same as the variance you would obtain when computing the Hessian at mu_t. \\n\\nLets start again with i) beta=0 (exactly zero) and then again ii) beta --> 0 (close to zero).\\nIn i), it is clear that we will always obtain just the MLE estimate (not even MAP) with 0 covariance in every step, as we will always completely ignore the KL term. Now I don't see how for ii) the covariance suddenly jumps to the correct covariance for an infinitesimal change. \\n\\nLets try to get there through your results in App. C.\\nStep 1) dL/dSigma = 0: We obtain an optimal (local minimum with derivative zero) precision matrix in (9) under the condition that beta is close to zero. I also agree with the resulting recursion of the precision.\", \"note\": \"So far, we have not concluded whether or not this optimal precision matrix is the same as we would get from Laplace's approximation. It is just whatever we get from optimizing the beta-ELBO, and in case of beta=0, precision will be inf.\\nStep 2) Eq. (10) now looks at the ELBO when we use the previous posterior resulting from optimizing the beta-ELBO.\\nNow, I agree that if beta is only *close* to zero, it will cancel (for the Hessian). \\nAnd I also agree that the beta-ELBO then looks like the optim. from Laplace propagation. \\nBut that is only for optimizing mu! \\nSo I agree also that each iteration (time-step) t should in theory find the same mu_t as we would for Laplace Propagation. \\nHowever, my understanding is that we will have a too sharp posterior at t-1 and then because we down-weight the KL term through beta, it will act as if it had the correct variance. So in the limiting case, we have a dirac distribution and it is weighted with zero, canceling completely. \\n\\nIn other words, the resulting posterior approximation at every iteration is not identical to Laplace's approximation, but the mean is. This means that posterior predictives will not use the right posterior (although we could apply Laplace's approximation just for posterior predictives while doing continual learning as proposed).\\n\\nDo you think there is still a misunderstanding or would you agree?\"}",
"{\"title\": \"Reviewer 3 Response\", \"comment\": \"Many thanks for your comments. They have helped us improve the paper. We respond to your questions below.\\n\\n1. It is kind of weird to start from the Bayesian framework and then go back to the non-Bayesian perspective.\\n\\nWe did not intend to claim that our approach is entirely Bayesian, and our resulting algorithm is not. Rather, we considered this work to be Bayesian inspired: we looked at the Bayesian approach (VCL) and made careful adjustments to fix key issues affecting it. Please also see the general response, point 2 for more discussion of these points.\\n\\nIn the latest version of the paper, we re-interpreted $\\\\lambda$ so that it falls into the general theme of the paper as tempering different parts of the ELBO in light of the cold-posterior effect. Note that tempering is not Bayesian, but rather a commonly-used workaround for dealing with the poor performance of Bayesian methods.\\n\\n2. Moreover, as described in Sec. 2.3, the resultant GVCL when $\\\\beta \\\\to 0$ is actually different from the previous online EWC algorithm\\n\\nWe assume you are referring to the difference in $\\\\tilde{\\\\lambda}$ and $\\\\lambda$. These two approaches differ only in their treatment of the prior variance. In the original EWC paper, and the paper proposing Online-EWC, it is unclear what the treatment of the prior covariance should be, hence this difference arises due to an ambiguity in the original algorithm. Furthermore, note that with increasingly small $\\\\beta$, the difference between $\\\\tilde{\\\\lambda}$ and $\\\\lambda$ vanishes, as the prior term becomes negligible compared to the Hessian.\\n\\nWe have changed the text to make this clearer in Section 2.3.\\n\\n3. GVCL ... should perform at least the same as VCL and online EWC.\\n\\nAs we mentioned previously, GVCL performs worse than Online EWC sometimes due to optimization issues with convergence as $\\\\beta \\\\to 0$. However, note that GVCL-F always outperforms VCL-F and EWC-F as expected. We also note that there is a benefit to having a unifying framework that encompasses a range of existing approaches allowing them to be better understood.\\n\\nWe have now added a toy example of GVCL in a toy 2d regression dataset showing the convergent behaviour of GVCL to Online EWC. In this toy example, it takes 10 times longer to achieve convergence for very small values of $\\\\beta$ (1e-4) compared to $\\\\beta = 1$. Note that these values of $\\\\beta$ are smaller than we had in our neural network examples, and given that the toy example has only 3 parameters yet still took a very long time to optimize, it is likely that we cannot practically reach the Online EWC limit on a neural network of even modest size. \\n\\n4. Regarding the results of the GVCL and GVCL-F, it seems that the improvement mainly comes from the FiLM layers\\n\\nThis is not a correct interpretation of the results -- the improvement comes from both GVCL and FiLM layers, with significant contributions coming from both. We show this explicitly in the new Section 5.4, which as you suggested, looks at the performance gains from adding FiLM layers to GVCL, VCL and Online EWC. It shows for example:\", \"split_cifar\": \"moving from VCL to GVCL is 25% gain, adding FiLM layers is a further 11% gain\", \"mixed_vision\": \"moving from VCL to GVCL is 24% gain, adding FiLM layers is a further 30% gain\\n\\nAlso note adding FiLM layers to Online EWC only gives 0.1% and 7.7% improvement on these datasets respectively. 
See Table 2 and General Response point 3 for additional information.\\n\\nSo we see that FiLM layers alone provide little benefit to Online EWC, and a key insight in this paper is how FiLM layers interact with variational methods to fix the pruning issue. Unlike HAT, because of the interaction with the prior, we do not need more complex training procedures to learn the FiLM parameters. Note that we did not include HAT + FiLM layers since it already has per-task channel-wise gating layers.\"}",
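For readers unfamiliar with the architecture being discussed in this thread, a task-specific FiLM layer is a per-task, per-channel affine transform applied to intermediate activations. The PyTorch sketch below shows the basic mechanism only; it is our illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TaskFiLM(nn.Module):
    """Minimal task-specific FiLM layer: each task gets its own per-channel
    scale (gamma) and shift (beta), applied after a shared conv block."""
    def __init__(self, num_tasks, num_channels):
        super().__init__()
        self.gamma = nn.Parameter(torch.ones(num_tasks, num_channels))
        self.beta = nn.Parameter(torch.zeros(num_tasks, num_channels))

    def forward(self, x, task_id):
        # x: (batch, channels, H, W); broadcast the current task's affine params
        g = self.gamma[task_id].view(1, -1, 1, 1)
        b = self.beta[task_id].view(1, -1, 1, 1)
        return g * x + b
```

Because gamma and beta are deterministic and task-private, a channel that variational pruning would otherwise fill with noise can instead be rescaled or switched off per task, which is the interaction with the prior that the response above appeals to.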
"{\"title\": \"Reviewer 1 Response\", \"comment\": \"Many thanks for your comments. They have helped us improve the paper. We respond to your questions below.\\n\\n1. It would be interesting to have VCL and Online EWC added to Figures 2 and 3.\\n\\nWe purposefully included only the best algorithms in Figures 2 and 3 so that the y-axis covers an appropriate range. VCL and Online EWC perform relatively poorly, so we left them out so that it is easy to compare the performance of the better algorithms. We have added additional figures to Appendix K which show these plots, along with the performance of additional algorithms.\\n\\n2. Why is GVCL significantly worse than baselines for split-mnist (Figure 2c)?\\n\\nThe baselines in Figure 2c are potentially confusing, and it is unfair to compare vanilla GVCL to them. HAT and GVCL-F store task-specific parameters, and hence have growing memory demands with number of tasks. Therefore the natural comparison for HAT is GVCL-F, and not GVCL (note that Joint-MAP is not a continual learning algorithm). It appears that this is much more important for Split-MNIST than the CHASY benchmarks.\\n\\n3. Why is split-mnist omitted from Figure 3?\\n\\nMany of the tasks all effectively are at 100% performance, so including it does not provide much useful information. However, we have included the plot in Appendix K.\\n\\n4. The supplementary material contains some analysis on the effect and sensitivity of the value of $\\\\beta$ on the performance of the algorithm. This should be extended and presented in the main paper.\\n\\nWe have added more explanation in the main text (end of Section 2.2). We have also now added a small toy example showing the convergent behaviour of GVCL to Online EWC in Appendix B, which includes a range of $\\\\beta$ values.\"}",
"{\"title\": \"Reviewer 2 Response Part 2\", \"comment\": \"5. Figure 8 in the supplementary material probably has some legends mixed up, or the explanations that small beta values cause locally measured locally are wrong\\n\\nThe legends are correct, and there is likely some misunderstanding here. We clarify that by \\u201clocal\\u201d we refer to the immediate vicinity around a point, i.e. if we zoomed in very close to a point. We also reiterate that a \\u201cgood\\u201d and a \\u201clocal\\u201d approximation are not to be conflated, particularly in the toy examples presented.\\n\\nFor Figures 8b and 8c, there is a cusp at the mode of the distribution. Therefore, a local approximation would approximate the function immediately surrounding that cusp, and ignore the regions far away. We see that this is exactly what $\\\\beta = 0.1$ (very small $\\\\beta$) does: it has a very sharp curve. Similarly, for Figure 1, if zoomed in very close to the mode of (a), we would notice that the true distribution is nearly flat. This means that a local fit would match that flatness, and ignore the fact that it begins to curve further away from the mode. $\\\\beta = 0.1$ does exactly this. It appears to be a bad fit because of the scale of our graphs, but if we zoomed in very closely to the mode, it would appear to be the best fit, because it is the most local fit.\"}",
"{\"title\": \"Reviewer 2 Response\", \"comment\": \"Many thanks for your comments. They have helped us improve the paper. We respond to your questions below.\\n\\n1. The two contributions seem quite orthogonal to each other and each of them is rather minor in novelty.\\n\\nWe disagree that these contributions are minor. In this paper, we show that several continual learning algorithms all arise from a single unifying framework, and certain choices and hyperparameters in each algorithm all arise by making different choices in the tempering of posteriors, priors, and likelihoods. For example:\\n\\n* VCL occurs when no tempering is performed\\n* Online-EWC, Online-Structured Laplace and SOLA are instances of GVCL where $\\\\beta \\\\to 0$, and the choice of $Q$ distribution is changed\\n* $\\\\lambda$ arises by tempering the posterior and prior using the same temperature in the KL-divergence\\n* Online-EWC\\u2019s $\\\\gamma$ arises by tempering the posterior and prior using different temperatures in the KL-divergence\\n\\nA unifying algorithm immediately opens the door for different choices of these parameters, and lets us understand the relationship between and broader context of these algorithms. Naturally, it means that improvements and innovations in one of these algorithms can readily be applied to others and paves the way for rapid and systematic progress.\\n\\nOur second contribution, the usage of FiLM layers, addresses a key limitation in variational methods, which is particularly problematic in the continual learning setting. While this contribution is orthogonal to GVCL, it is particularly synergistic. What differs between our version of FiLM layers and other similar algorithms, such as HAT, is its synergy with variational methods. Because of the pruning effect and the prior, no special algorithm is needed to fit these FiLM layers, and the resulting gain for variational methods is over 10%, compared to merely 2% for non-variational methods. We have added a new section in the revised version of the paper to make this clear (Section 5.4).\\n\\n2. I am not sure if the authors a) compute Laplace\\u2019s approximation in the end, at the resulting mean of q, for any beta value?\\n\\nThis is a misunderstanding. We have responded to this point in the general response (point 1), but add further clarification below.\\n\\nTo be clear, we never compute Laplace\\u2019s approximation directly. We update $\\\\Sigma$ using the $\\\\beta$-ELBO and show that this recovers a version of Laplace\\u2019s approximation in a limiting case. In the derivation of this result, we did not assume $\\\\beta = 0$, but rather assume that $\\\\beta$ is very close to zero as we take the limit. When this is done, there is a cancellation in the $\\\\beta$-ELBO whereby the beta-dependence in the previous posterior cancels with the beta term in the $\\\\beta$-ELBO, giving rise to the EWC regularisation (see equations 10 and 11 in Appendix C).\\n\\nIn our derivation, we did not assume the covariance was zero, but we assumed it was $\\\\textit{near}$-zero, so we can still apply normal arithmetic to it. \\n\\nWe agree that in Appendix C, the statement that \\u201cq approaches a delta function\\u201d was rather imprecise. To amend this, we have now added a proof that moving from Equation 8 to 9 is valid for small $\\\\beta$. This is included in Appendix C.1.\\n\\n3. Related work: The related work section is rather short mentioning only very few related approaches. 
More effort is required here.\\n\\nNote that many related works are mentioned previously in the text (e.g. 16 unique texts in sections 2 and 3), not only in the Related Works section. Also, we have a longer related work section in Appendix I, which we were unable to include in the main text due to space constraints. As we now have additional space (from gaining an additional page), we are happy to expand this section in the main text. Please let us know of the specific references you have in mind.\\n\\n4. I am wondering why e.g. Fig. 2 does not include VCL and EWC\\n\\nWe have included the requested plots in Appendix J. We only included the best performing algorithms in figure 2 since VCL and EWC performance lies well below the range of the graph.\"}",
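One way to read the tempering argument in point 1 of the response above: for Gaussians, raising a density to a power and renormalizing rescales the covariance, so tempering the previous posterior before taking the KL scales the quadratic penalty. The following is a schematic restatement under that assumption, not the paper's exact equations:

```latex
% renormalized power of a Gaussian rescales its covariance
\mathcal{N}(\theta; \mu, \Sigma)^{\lambda} \;\propto\; \mathcal{N}\!\left(\theta; \mu, \tfrac{1}{\lambda}\Sigma\right)

% so with \tilde{q}_{t-1} \propto q_{t-1}^{\lambda}, the KL term acquires an EWC-style multiplier
\mathrm{KL}\!\left(q \,\Vert\, \tilde{q}_{t-1}\right) \;\supset\;
  \tfrac{\lambda}{2}\,(\mu - \mu_{t-1})^{\top} \Sigma_{t-1}^{-1} (\mu - \mu_{t-1})
```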
"{\"title\": \"Reviewer 4 Response\", \"comment\": \"Many thanks for your comments. They have helped us improve the paper. We respond to your questions below.\\n\\n1. I found the introduction of the reweighting terms in Sec. 2.3 to be ad hoc and not particularly well justified...I think the authors should dig deeper here for better justifications for such choices\\u2026\\n\\nWe agree that the writing in Section 2.3 could be improved and have re-written this section connecting the introduction of the parameter $\\\\lambda$ to the literature on cold posteriors. Please also see the general response, point 2 for more information.\\n\\n2. \\u201dAdditionally, the film layers work great, but I maybe missed if they are the main attraction powering performance or if it is the combination with the new ELBO. Would film layers with VCL do equally well?\\u201d\\n\\nWe have added Section 5.4, which shows the relative performance gain from adding FiLM layers to VCL and Online EWC. We see that GVCL and VCL both see large performance gains while Online EWC only receives marginal gains. This suggests that FiLM layers are particularly synergistic with VI based methods. Additionally, GVCL+FiLM outperforms VCL+FiLM.\\n\\n3. The title is somewhat misleading\\n\\nWe selected the title as the paper generalizes several existing continual learning algorithms under a single variational framework, related by the choice of Q distribution class and choices related to the tempering of distributions. We are happy to consider alternative titles and would be open to hear your suggestions.\"}",
"{\"title\": \"General Response Part 2\", \"comment\": \"Here we note the changes we have made to the revised version of the paper:\\n\\n1. A derivation of the quadratic term multiplier based on tempering the posterior and prior in Section 2.3.\\n2. Section 5.4 (and Table 2), which includes performance gains from adding FiLM layers to EWC and VCL.\\n3. Appendix B, which empirically shows the convergence of GVCL to Online EWC in a toy example and highlights the difficulty of 4. achieving this limit in practice. \\n5. Appendix C.1, which includes a proof that the delta-function argument used in Equation 8 to 9 is valid. \\n6. Appendix C.3, which shows how Online-EWC\\u2019s $\\\\gamma$ arises by tempering the posterior and prior in the KL-divergence by slightly different temperatures\\n7. Additional results of EWC + FiLM and VCL + FiLM added to the full result tables in Appendix J, as well as additional figures showing each algorithm\\u2019s performance on each task (similar to Figures 2 and 3).\"}",
"{\"title\": \"General Response\", \"comment\": \"We thank all the reviews for their thoughtful criticisms and suggestions. Here, we address the main points raised by reviewers and outline the changes we have made to the paper in response. More specific points are addressed in the response to each of the individual reviewers.\\n\\n\\n1. Reviewer 2 has concerns about the mathematical limit that connects generalized variational inference to Laplace\\u2019s approximation \\n\\nThe reviewer has misunderstood the theory in our paper associated with the limit $\\\\beta \\\\to 0$. We briefly outline the key result: \\n\\n* Consider running GVCL with a common $\\\\beta$ value used across all tasks\\n* Now take the limit of this procedure as $\\\\beta$ tends to zero\\n* In this case, all the approximate posteriors (qs) limit to deltas around the MAP estimate\\n* The inverse variances (precisions) of these approximate posteriors tend to sums of the Hessians at the MAP value scaled by 1/$\\\\beta$\\n*Critically, the objective functions for each task become equal to online EWC due to a cancellation of the terms involving beta \\n\\nWe have improved the discussion of this limit by adding more detail to Appendix C and C.1, in particular we justify the argument that for small $\\\\beta$ the expectation in Equation 8 becomes approximately the Hessian at the mean (Equation 9).\\n\\n\\n2. Reviewers 3 and 4 are worried that the introduction of $\\\\lambda$ -- which is necessary to recover Online EWC in a general way -- is not theoretically-well justified\\n\\nIt is true that, from a Bayesian perspective, it is not straightforward to justify the introduction of the parameter $\\\\lambda$. In the revised version of the paper, we add a theoretical explanation for reweighting the quadratic term with $\\\\lambda$ as well (Section 2.3).\\n\\nIn the re-written Section 2.3, we show that $\\\\lambda$ arises if we make use of tempering, as has been proposed in the context of cold posteriors (Wenzel et al. 2020). Specifically, at each step we temper the previous posterior before applying variational inference. We believe that this new interpretation sheds light on the relationship between the effectiveness of $\\\\lambda$ and cold posteriors.\\n\\nWe would also like to reiterate that our final algorithm cannot be strictly considered \\u201cBayesian,\\u201d nor do we claim it to be. Rather, there is a general trend in the Bayesian Deep Learning community whereby Bayesian methods are used to develop new algorithmic approaches to deep learning problems and then relaxations of these approaches are considered, with additional parameters, that perform better empirically than the pure-Bayesian method. EWC was developed using this approach (and indeed this resulted in the same $\\\\lambda$ parameter being introduced without rigorous justification) and more recent work has followed this example (Kirkpatrick et al. 2016, Ritter et al. 2018, Osawa et al. 2019 , Pan et al. 2020, Wenzel et al. 2020, Higgins et al 2017, Alemi et al. 2017). This class of approaches has been called \\u2018Bayesian Inspired\\u2019 and we see the current work as belonging to this pragmatic vein.\\n\\nIn this paper we take the more strictly Bayesian VCL algorithm, and improve it by addressing the main shortcomings of variational Bayesian methods. Namely, we address the poor data fit problem by considering tempered likelihoods with $\\\\beta$ and $\\\\lambda$, and fix the pruning issue using FiLM layers.\\n\\n\\n3. 
Reviewers 3 and 4 question \\u201cwhether the performance gains are from FiLM Layers or GVCL\\u201d\\n\\nWe have added results on all benchmarks with Online EWC + FiLM layers, and VCL + FiLM layers in section 5.4. These results show that (i) FiLM layers provide a significant benefit to the variational algorithms (VCL and GVCL), while not so much for Online EWC, (ii) GVCL + FiLM outperforms all competing algorithms, with both innovations contributing to the improved performance.\\n\\nTo summarise, in the revised Section 5.4 in Table 2 and Appendix J we see:\\n* VCL+FiLM >> VCL and GVCL+FiLM >> GVCL, while EWC + FiLM $\\\\approx$ EWC.\\n* GVCL+FiLM > GVCL > VCL+FiLM >> VCL \\n\\n\\n4. AnonReviewers 1 and 2 have some questions about the omission of certain baseline algorithms from figures\\n\\nFor Figures 2 and 3, we only included the GVCL, GVCL-F, and the top performing baseline algorithm. This was done to keep the figure uncluttered, and to keep the y-axis in a reasonable range as these baseline algorithms perform poorly. However, as requested by reviewers, we have now added some extra figures in Appendix J.\"}",
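For concreteness, the cancellation described in point 1 of this general response can be written schematically as follows. This is our restatement of the argument referenced above (Appendix C of the paper), with $H_t$ denoting the Hessian of the negative expected log-likelihood at the task-$t$ mean; it is a sketch, not the paper's exact derivation:

```latex
% beta-ELBO optimized by GVCL at task t
\mathcal{L}^{\beta}_{t}(q) \;=\; \mathbb{E}_{q(\theta)}\!\left[\log p(\mathcal{D}_t \mid \theta)\right]
  \;-\; \beta\,\mathrm{KL}\!\left(q(\theta)\,\Vert\, q_{t-1}(\theta)\right)

% stationary Gaussian precision for small beta
\Sigma_t^{-1} \;\approx\; \Sigma_{t-1}^{-1} + \tfrac{1}{\beta} H_t

% the 1/beta in the previous precision cancels the beta on the KL, leaving
% (up to additive constants) the Online EWC quadratic penalty
\beta\,\mathrm{KL}\!\left(q \,\Vert\, q_{t-1}\right) \;\to\;
  \tfrac{1}{2}\,(\mu - \mu_{t-1})^{\top}\Big(\textstyle\sum_{s<t} H_s\Big)(\mu - \mu_{t-1})
```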
"{\"title\": \"Interesting take on Unifying VCL and EWC\", \"review\": \"The authors propose Generalized VCL in this paper, which consists of multiple ideas: first, the authors introduce a beta-Elbo, which facilitates downweighting the KL-term of VCL. If beta taken to the limit towards zero, the authors show that the beta-elbo recovers the online EWC learning criterion, which draws an interesting link between VCL and EWC.\\nThe authors also discuss reweighting terms to introduce a parameter lambda as in EWC, which they incorporate via a lambda-kl divergence term.\\nFinally, furnished with this learning objective that interpolates between VCL and EWC, the authors propose to combine the learning objective with the architectural choice of Film layers, which they show facilitate overcoming the pruning behavior that their method inherits from VCL by offering ways to prune nodes without injecting noise into the network.\\n\\nExperiments are broad on multiple interesting datasets and quite clearly show that their proposed combined model performs best.\", \"positives\": \"The paper draws an interesting unification between EWC and VCL, and in fact also other related works, as subtle modifications in a regularizer. This by itself is an interesting contribution. The fact that the authors study the interplay of their learning arlgorithm with architectural biases, i.e. overcoming early pruning via film layers, is also a valuable idea that I find not just interesting in itself, but also stylistically valuable as an approach to studying deep learning. While the Film layers per se also appear somewhat ad hoc, their empirical benefits -particuarly when paired with the lambda-elbo, are impressive and well put together.\", \"criticisms\": \"While I really enjoy the derivation of the beta-elbo in the zero limit, I found the introduction of the reweighting terms in Sec. 2.3 to be ad hoc and not particularly well justified. It feels as if it is reverse engineered to match the desired criterion from EWC. I think the authors should dig deeper here for better justifications for such choices, as they did a good job having a mathematically interesting framework to derive earlier.\\n\\nAdditionally, the film layers work great, but I maybe missed if they are the main attraction powering performance or if it is the combination with the new ELBO. Would film layers with VCL do equally well? This is empirically confusing, it would be great to get some more help to understand the relative merits of each components here and clarify more how these pieces fit together empirically. I do enjoy the appendix discussing this qualitatively, but I would like to understand it quantitatively better, as theoretically film layers plus VCL (without this paper's innovations) should also benefit similarly.\\n\\nOne additional criticism is that the title is somewhat misleading, as it does not generalize VCL to broader settings, but rather collapses it towards the limit beta towards zero. The title raised hopes for a richer variational treatment rather than a unification to EWC and an architecture change. The authors might want to consider tweaking the title to sth that is closer to the paper's actual contributions.\", \"overall\": \"This paper takes an interesting approach towards adding to the EWC and VCL literature by unifying them and offering an architectural fix for a key problem in these scenarios. 
While the contributions are mixed and not consistently derived from clear modeling assumptions, their interplay is well studied and highly relevant to the understanding and improvement of practical continual learning. I also want to again applaud the authors for studying and explaining the interplay of pruning and film layers, I enjoyed reading the supplementary information on this. I wish more papers that discover methods that perform well empirically would study the interplays of algorithm and architecture similarly to expose interesting effects.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #2\", \"review\": \"This work considers online variational Bayesian approaches to continual learning. The authors propose a beta-ELBO objective which they claim interpolates between Gaussian variational inference (beta = 1) and Laplace\\u2019s approximation (beta = 0).\\nFurthermore, the authors propose task-specific, non-probabilistic (point estimation) FiLM layers that apply an element-wise transformation to the activations.\\n\\nTheory / Contribution:\\nThe two contributions seem quite orthogonal to each other and each of them is rather minor in novelty. \\nIt is obvious that using beta=0 leads to MAP estimates from which Laplace\\u2019s approximation can be computed. However, I am quite confused what exactly the authors do here and there could be a major mistake:\\nFrom the paper, I am not sure if the authors a) compute Laplace\\u2019s approximation in the end, at the resulting mean of q, for any beta value? As far as I understand, the authors instead b) only optimise the variance through the beta-ELBO. \\nHowever, in this case, the resulting approximation would *not* identical to Laplace\\u2019s approximation!\\nI need clarification what the authors are doing here.\\nConsider the case of beta=0, the covariance will be the dirac distribution as the authors note in Sec. 2.2 or the supplementary material. The authors then go on and write the optimal covariance matrix for which the derivative of the beta-ELBO is zero.\\nYou have first postulated that the covariance is zero, in order to be able to pull out the expectation, and then you again allow for a non-zero beta-elbo-minimizing covariance. This would be a contraction. This makes me guess you do compute Laplace\\u2019s approximation instead. But then it is not discussed how you deal with beta>0.\", \"related_work\": \"The related work section is rather short mentioning only very few related approaches. More effort is required here.\", \"experiments\": \"The experimental evaluation is thorough and seems promising. Although I am wondering why e.g. Fig. 2 does not include VCL and EWC. Figure 8 in the supplementary material probably has some legends mixed up, or the explanations that small beta values cause locally measured locally are wrong? For Fig. a), the largest beta=10 seems to be a good approximation and also the most local. In case of Fig. B) and C) it is unclear / subjective (from visually inspecting the likelihood function) which is the best approximation. In A), beta=0.1 is the least local approximation, in B) beta=10 and in C) beta=1. I cannot follow the intuition provided here.\", \"summary\": \"I am sceptical about the correctness regarding the equivalence between VI and Laplace\\u2019s approximation; the exact approach proposed in the paper is unclear and may be based on a contradiction. In case I have a misunderstanding here, I hope the authors will point this out and update the manuscript.\", \"update_after_rebuttal\": \"The authors provided clarifications and improved the manuscript. \\nIn particular, the authors now detail the two special cases (beta=0, beta=1) and how it relates to EWC and VCL. \\nI am no longer sceptical that the claims regarding the equivalence to EWC in case of beta=0 is correct. \\nBased on this, I changed my evaluation and now suggest acceptance.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"GVCL\", \"review\": \"This paper proposes Generalized Variational Continual Learning (GVCL). It is shown that Online EWC and VCL are special cases of GVCL, along with other theoretical contributions. Further, GVCL is augmented with FiLM to alleviate weaknesses of VCL and GVCL. GVCL and GVCL-F are applied to a number of continual learning tasks and demonstrate competitive performance.\\n\\nAlthough GVCL and GVCL-F do not outperform baselines, particularly in hard settings (split-mnist and mixed vision), GVCL is an original and excellent contribution. The paper is clear and well-written, the proposed algorithm is theoretically motivated and analysed, experiments are comprehensive, demonstrating the empirical performance of GVCL.\", \"i_have_the_following_comments\": [\"It would be interesting to have VCL and Online EWC added to Figures 2 and 3.\", \"Why is GVFL significantly worse than baselines for split-mnist (Figure 2c)?\", \"Why is split-mnist omitted from Figure 3?\", \"The supplementary material contains some analysis on the effect and sensitivity of the value of $\\\\beta$ on the performance of the algorithm. This should be extended and presented in the main paper.\"], \"minor\": [\"\\\"the node is *effective* shut off\\\" -> effectively\"], \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"An interesting perspective but lack of preciseness and kind of unclear.\", \"review\": \"This paper proposed a generalized variational continual learning (GVCL) framework using the \\\\beta - ELBO, and then combined with FiLM layers. The idea is interesting but there is a lack of preciseness. The pros and cons are as follows.\", \"pros\": \"1. The proposed GVCL proposed a different and interesting perspective on the online EWC, viewed as a special case of \\\\beta \\\\to 0;\\n2. FiLM layers are introduced to combine with GVCL, which lead to significant improvement in the performance;\\n3. Various experiments are performed, showing some level of advantages.\", \"cons\": \"1. The new perspective that online EWC could be viewed as a special case of the GVCL framework is lacking preciseness. First of all, as described in Sec. 2.3, the result of the \\\\beta-ELBO, even with \\\\beta \\\\to 0, does not lead to the key hyper parameter \\\\lambda in online EWC. To compensate this, the authors introduce a modified KL divergence to make them similar. However, it is not justified, from a unified Bayesian or some other theoretical perspective , why the previous \\\\beta-ELBO needs to be modified. It is kind of wired to start from the Bayesian framework and then go back to the non-Bayesian perspective to design a Bayesian algorithm to improve the performance, and then claim that the previous non-Bayesian algorithm is a special case of the unified Bayesian framework. Moreover, as described in Sec. 2.3, the resultant GVCL when \\\\beta \\\\to 0 is actually different from the previous online EWC algorithm. As a result, strictly speaking, it is not approperiate to claim that the online EWC could be recovered as a limiting case. \\n\\n2. If it is true that the proposed GVCL is a generalization of VCL and Online EWC, which allows interpolation between the two, then it is expected and reasonable that the GVCL alone (without additional FiLM layers) should perform at least the same as VCL and online EWC. Otherwise, the statement is not true and there is no advantage of the proposed GVCL framework . However, as shown in experimental results, e.g., Table 1, GVCL alone performs worse than Online EWC in large datasets, which is really wired. The authors also acknowledged this point and claimed that this is due to the difficulty in optimizing GVCL with small \\\\beta. It would be better to make such statement more precise because this is really important point for this paper. Otherwise, it implies that the so-called interpolation between VCL and online EWC has no additional advantage. \\n\\n3. Regarding the results of the GVCL and GVCL-F, it seems that the improvement mainly comes from the FiLM layers, rather than the GVCL framework itself. To make this more clear and for a more fair comparison, it is highly suggested to compare other methods (online EWC, VCL, HAT, etc) with FiLM layers. Otherwise, the current improvement of the performance is unclear. In addition, the improvement of GVCL-F over the baseline is not consistent.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
U7-FJu0iE3t | Succinct Explanations with Cascading Decision Trees | [
"JIALU ZHANG",
"Mark Santolucito",
"Ruzica Piskac"
] | Classic decision tree learning is a binary classification algorithm that constructs models with first-class transparency - every classification has a directly derivable explanation. However, learning decision trees on modern datasets generates large trees, which in turn generate decision paths of excessive depth, obscuring the explanation of classifications. To improve the comprehensibility of classifications, we propose a new decision tree model that we call Cascading Decision Trees. Cascading Decision Trees reduce the size of explanations of classifications without sacrificing model performance overall. Our key insight is to separate the notion of a decision path and an explanation path. Utilizing this insight, instead of having one monolithic decision tree, we build several smaller decision subtrees and cascade them in sequence. Our cascading decision subtrees are designed to specifically target explanations for positive classifications. This way each subtree identifies the smallest set of features that can classify as many positive samples as possible, without misclassifying any negative samples. Applying cascading decision trees to new samples results in a significantly shorter and more succinct explanation, if one of the subtrees detects a positive classification. In that case, we immediately stop and report the decision path of only the current subtree to the user as an explanation for the classification. We evaluate our algorithm on standard datasets, as well as new real-world applications, and find that our model shortens the explanation depth by over 40.8\% for positive classifications compared to the classic decision tree model.
| [
"Decision Trees",
"Explainability",
"Interpretability"
] | Reject | https://openreview.net/pdf?id=U7-FJu0iE3t | https://openreview.net/forum?id=U7-FJu0iE3t | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"UccjW6ik1Tf",
"FJ4VuvkCcfj",
"O98YplCR3ne",
"O8AqMxqxZcm",
"kivlhk3kuY"
],
"note_type": [
"decision",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513031,
1604116260331,
1603876823335,
1603826747444,
1603418491583
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3593/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3593/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3593/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3593/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper introduces the idea of cascading decision trees. The reviewers agree that this is a potentially novel and valuable idea, but they also agree that the paper fall short in execution. The paper would be substantially strengthened with more theoretical analysis, more discussion of why cascading decision trees are useful, and most importantly substantially more empirical evaluation, especially with more data sets and more baselines for comparison.\"}",
"{\"title\": \"Recommending Rejection (Promising Work but Underdeveloped)\", \"review\": \"# Summary\\n\\nThis paper introduces a new type of classification model called the \\\"cascading decision tree.\\\" The cascading decision tree is a rule-based classifier designed to have an overlapping hierarchical structure between its nodes to produce succinct explanations. The paper introduces these models, presents an induction algorithm to learn them from data, and includes an empirical evaluation on three UCI datasets as well as a propietary dataset. The submission includes code.\\n\\n# Pros\\n\\n1. The paper introduces a new kind of classification model. This model form is a contribution in and of itself (i.e., regardless of the algorithm used to fit cascading decision trees from data). \\n\\n2. The paper highlights an innovative approach to the design of machine learning models \\u2013 i.e., training models that are constrained to have particular \\\"explainability\\\" properties.\\n\\n# Cons\\n\\n3. The cascading trees produced by the algorithm in this work have little to no formal guarantees regarding their optimality or generalization produces. It is unclear if this is the best way to learn cascading decision trees.\\n\\n4. The paper does not provide a pruning routine. The authors suggest that the algorithm can be paired with any generic pruning method. However, the empirical results do not showcase how the trees perform after pruning. \\n\\n5. The experimental section is lacking in multiple ways. Ideally, this section should include comparisons on more than three datasets, and consider other baseline models such as \\\"rule lists\\\" (i.e., a special kind of decision tree) and sparse linear models (i.e., a type of model that does not require explanations). Finally, I would recommend the authors to include a plot that shows the distribution of explanation depths for all the examples in a dataset. This would allow readers to have a far better understanding of how each method affects the explanation depth (as compared to a comparison of the means).\\n\\n6. The paper does not make a strong case to motivate why \\\"shorter explanations are better.\\\" This is unfortunate given that succinct explanations are the primary motivation for using cascading decision trees. At a minimum, the paper should include a clear demonstration the advantages of using succinct explanations in a modern application. Ideally, this would include: (i) comparisons of the explanations produced for the same point by competing methods; (ii) a study of how the properties of explanations change based on other relevant phenomena (e.g., explanation depth for seen/unseen points).\\n\\n# Rating\\n\\nOverall, I was convinced that \\\"cascading decision trees\\\" were a valuable model class. I was also convinced that this work was valuable in that it highlights a novel approach for supervised learning (i.e., training models with explicit constraints on explainability such as in https://arxiv.org/abs/1703.03717)\\n\\nMy current rating (3) is based on the fact that the submission fails to analyze, validate, or motivate cascading decision trees sufficiently. Ideally, the paper should include a thorough analysis of the tree induction algorithm (as discussed in 3 and 4) as is standard in other work on decision trees. 
It should include more robust evidence that the proposed method produces succinct explanations (as discussed in 5), as well as a convincing demonstration of the utility of succinct explanations in modern applications (as discussed in 6).\\n\\n# Questions\\n\\nQ1. How does one measure the \\\"quality of a cascading decision tree\\\"? Is it only in terms of \\\"explanation depth?\\\"\\n\\nQ2. Did you use a pruning routine in your experiments?\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting idea with a too limited empirical evaluation\", \"review\": \"*Summary*\\nThis paper introduces the Cascading Decision Tree, a novel variant of decision trees with permits to extract short explanations for a class of interest. The idea is to realize a cascade of small decision trees: at a certain level, the tree is built using all points except the positive ones correctly classified by trees in previous levels. The method has been tested using three standard datasets and a novel application.\\n\\n*Positive points*\\n-> The motivation of this paper is definitely interesting. Actually, interpretability represents one of the key features of decision trees: deeper trees, which may be needed for solving complex classification tasks, may loose this property.\\n-> The proposed approach is simple but reasonable.\\n-> The paper is well written and clear.\\n\\n\\n\\n*Negative points/questions*\\n\\n-> The main problem of the proposed method is the empirical evaluation:\\ni) authors used only three ML datasets of moderate size\\nii) the impact of different parameters of the proposed methods is not analysed. For example, which is the impact of the threshold? (Authors only used 0.8 in all experiments). Further, all trees in the cascading architecture have max depth of 3. What about using longer/shorter trees?\\niii) Comparisons of results in tables are not supported by a statistical test. Is the accuracy of 93.51% (first row table 3) different than 93.16% (second row)? Without a statistical test it is impossible to derive significant conclusions. A possible strategy: shown results are computed using 5 fold cross validation, thus meaning that reported numbers represent the average of 5 repetitions; by repeating the whole procedure 10 times (with different random subdivisions of training and testing), author would have a more robust estimation of the reported numbers (average of 50 tests). Moreover, with such scheme they can also assess the statistical significance of differences by using a (paired) t-test.\\niv) the explanation depths of classic trees are in average very short, can authors comment this? Moreover, which is the maximum length of the classic decision trees? Authors simply reported that they used \\u201cvarious bounds on the depth of the learned tree\\u201d.\\nv) conclusions derived from tables are not completely supported by the tables. For example, in the paragraph \\u201cLow False-positive Rate\\u201d authors say that \\u201cAs shown in Table 3, the cascading decision trees algorithm has the lowest false positive (FP) rate for all three datasets compared to the classical decision trees algorithm.\\u201d. By looking at table 3 this is not evident: it seems to me that with Ionosphere and Sonar the lowest FP is obtained with the variant without cascading (\\u201cOff (max_depth =3)\\u201d). Am I correct?\\nvi) I wonder if the comparison with BioOCT is fair (in terms of classification accuracy), since the maximum depth of this last method has been fixed to three.\\nvii) Comparisons only involve standard trees: what about comparing also to other methods used to reduce the depth of decision trees? This would enlarge the scope of the experiments and the value of the proposed approach. \\n\\n\\n-> I think that the proposed approach can be better inserted into the state of the art. Actually, the combination of different classifiers in cascade is not new in the literature of ensemble classifiers, with many methods introduced. 
The authors comment only on boosting (in Section 3), but many other methods have been presented. Just to provide a few pointers to older works:\\n\\nE. Alpaydin and C. Kaynak. Cascading classifiers. KYBERNETIKA, 34(4):369\\u2013374, 1998\\nL. Bruzzone and R. Cossu. A multiple-cascade-classifier system for a robust and partially unsupervised updating of land-cover maps. IEEE Transactions on Geoscience and Remote Sensing, 40(9):1984\\u20131996, 2002\\nLudmila I. Kuncheva: Combining Pattern Classifiers - Methods and Algorithms, Wiley 2004\\n\\nI agree with the authors that the goal of their approach may be different (extracting shorter explanations rather than increasing classification performance), but a discussion of different cascading strategies may better contextualize and justify the proposed approach. \\n\\nIn the same spirit, I would suggest that the authors discuss techniques for obtaining compact decision trees (pruning, etc.).\\n\\n\\n-> If we focus on classification accuracies, decision trees do not represent state-of-the-art classifiers, with performance very often far from competitors like SVMs or neural networks. This somehow limits the application of these techniques (if not used inside Random Forests).\\n\\n\\n-> The authors provide methods and definitions (e.g. \\u201cvalid explanation\\u201d) for a setting in which we have binary features. Why? If I understand correctly, the definitions can easily be provided also for numerical features (with the usual split based on a threshold), providing a more general framework. Moreover, the experiments are done with datasets having numerical, non-binary features.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
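The evaluation protocol suggested in point iii) is straightforward to operationalize; a sketch with scikit-learn and SciPy follows (classifier choices and names are placeholders):

```python
import numpy as np
from scipy.stats import ttest_rel
from sklearn.base import clone
from sklearn.model_selection import RepeatedStratifiedKFold

def compare_models(model_a, model_b, X, y, folds=5, repeats=10, seed=0):
    """Accuracies of two classifiers over folds x repeats CV splits, plus a
    paired t-test on the per-split differences (the protocol sketched in
    point iii; note the usual caveat that CV folds are not independent)."""
    cv = RepeatedStratifiedKFold(n_splits=folds, n_repeats=repeats,
                                 random_state=seed)
    acc_a, acc_b = [], []
    for tr, te in cv.split(X, y):
        acc_a.append(clone(model_a).fit(X[tr], y[tr]).score(X[te], y[te]))
        acc_b.append(clone(model_b).fit(X[tr], y[tr]).score(X[te], y[te]))
    _, p_value = ttest_rel(acc_a, acc_b)
    return np.mean(acc_a), np.mean(acc_b), p_value
```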
"{\"title\": \"An interesting twist of decision trees that shortens the paths for improved explainability, but there are some concerns with the proposed technique\", \"review\": \"This work proposes to use cascade decision tree models to come up with shorter explanations for the predictions made by the decision trees.\\n\\nThe proposed technique focuses on explaining one class in a binary classification task, i.e., positive samples. The idea is to build a decision tree with predefined depth to classify positive samples, remove those classified positive, as well as negative samples in leaf nodes that are dominated by negative samples, from the dataset, and then repeat this process until the sequence of tree models is built.\\n\\nExplainability of ML models is an important topic, especially for medical scenarios. In addition, shortening the decision tree paths to improve explainability is a promising direction. The writing of the work is also clear and easy to follow.\\n\\nHaving said that, I have the following concerns about this work:\\n\\nW1. The application scenario is narrow. And it is not clear why the explanation path is not the concatenation of all the paths of the cascading trees.\\n\\nD1. The proposed approach only applies to binary classification task. It is not clear how this can extend to multi-class and regression models.\\n\\nD2. Also, even for binary classification, it only explains one class. It is counterintuitive that you need to build different models to explain the classification of the two classes from the same dataset.\\n\\nD3. Since the process of building a subtree is independent of the previous subtrees built, i.e., the features and splits of the tree being built does not consider the features and splits in previous subtrees, it seems to me that the subtree is pre-conditioned by the previous subtrees. It is not clear why it is correct to only use the subtree that the prediction ends for explanation instead of all the subtrees that are used before the prediction ends. \\n\\t\\t\\nW2. The complexity of this decision tree can be high, leading to high inference time. And the model can be overfit to positive samples.\\n\\nD4. Because the cascading model tries to construct leaves with most pure positive samples, the total depth and the number of trees in the model can be quite high. This will especially impact the overhead in the inference.\\n\\nD5. Also, given this tree is built for optimizing the classification of one class with very high accuracy, the model can be overfit to that class, and the prediction accuracy of the other class is not guaranteed.\\n\\nW3. The evaluation needs to be enhanced.\\n\\nD6. The average path length of classic decision trees is already small, i.e., less than 4. The evaluation needs to perform on more complex datasets.\\n\\nD7. In medical scenarios, we would expect the negative data samples are much more than positive data samples. It is already easy to overfit for the positive samples in this scenario, and as mentioned in D5, the technique itself also tends to overfit. Overfitting may not be a good idea for this skewed dataset.\\n\\nD8. It is unclear if the proposed technique gives better explanation without doing a user study. As mentioned in W1 and D3, it is not entirely clear why the explanation does not need to concatenate all the paths in the subtrees before the prediction ends. This is especially a concern since the classic decision tree only has a single tree. 
It would be great to conduct a user study to understand how the proposed technique impacts explainability.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
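To fix ideas on the training procedure the reviews discuss, here is a rough sketch of a cascade of shallow trees with positives removed between rounds. The mixed-node threshold and removal rule follow the descriptions in the abstract and reviews, but the details are an illustrative reconstruction, not the authors' code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def fit_cascade(X, y, max_depth=3, theta=0.8, max_trees=10):
    """Fit shallow subtrees in sequence; after each round, remove the
    positive samples that fall in leaves whose positive fraction on the
    current working set is at least theta."""
    trees, idx = [], np.arange(len(y))
    for _ in range(max_trees):
        if (y[idx] == 1).sum() == 0:
            break
        tree = DecisionTreeClassifier(max_depth=max_depth)
        tree.fit(X[idx], y[idx])
        trees.append(tree)
        leaves = tree.apply(X[idx])
        pos_frac = {l: (y[idx][leaves == l] == 1).mean()
                    for l in np.unique(leaves)}
        resolved = np.array([pos_frac[l] >= theta for l in leaves])
        keep = ~(resolved & (y[idx] == 1))  # drop confidently-handled positives
        if keep.all():
            break
        idx = idx[keep]
    return trees

def predict_with_explanation(trees, x):
    """Label plus the index of the subtree that fired; the explanation is
    the decision path inside that single subtree only."""
    for i, tree in enumerate(trees):
        if tree.predict(x.reshape(1, -1))[0] == 1:
            return 1, i
    return 0, len(trees) - 1
```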
"{\"title\": \"A novel idea that needs to be further polished.\", \"review\": \"The authors presented in this submission a nice novel idea of building a tree ensemble in a cascading style so that any positive predictions are decided and explained by the first tree predicting them positively. The reviewer finds this idea very interesting and clearly elaborated in this paper. However, more theoretical and empirical justification is crucially necessary in order to make the claims in the submission convincing. The issues listed here are some questions that the reviewer believes should have been discussed or answered in the paper.\", \"major_issues\": \"Despite the fact that the focus of cascading decision trees (CDTs) is a short explanatory path, my major concern here is that it is very hard to justify them as a proper statistical model. \\n\\n1.\\nPractically, there are very simple adversarial cases that the cascading decision trees will fail to build. Consider this example that X ~ Unif([0,1]^2) and Y|X == 1 if X \\\\in [1/3, 2/3]^2 otherwise 0. If we apply the algorithm in the paper, using depth=2 trees and a mixed node threshold theta = 0.5 (which I believe are a reasonable choice), the positive-focused CDTs will almost fail to construct the first tree with a decently large sample, let alone the cascading subsequence. On the other hand, since depth=2 trees can cover all rectangles (0,0) - (a,b) \\\\in R^2, the counterpart random forests or GBDTs are universal approximators. \\n\\nBy also checking the negative predictions there might be ways to mitigate this issue. However relevant discussions are lacking in this paper in its current shape.\", \"from_a_more_abstract_perspective_regarding_the_cdts_depth\": \"in a d-dim sample space, to separate out a d-dim cube, we are likely in the need of 2d splits. There might be fewer splits needed when the cube is touching the boundary of the support or there are other nodes made before as in an ensemble. But in general we should not make such assumptions, and it would be better if we could have more discussions in the paper.\\n\\n2\\nClassic classification trees are asymptotically bayesian classifiers, whereas we can imagine asymptotically CDTs assign the positive label to a region only when the true positive rate theta* within the region is larger than the constant threshold theta, which means CDTs are intrinsically inaccurate - more specifically, low recall as mentioned by the authors. This situation is further worsened as CDTs will take positive examples off from the sample. The fact that CDTs are theoretically incapable of finding all positive examples is harming their credibility of giving short explanations to positive predictions. A bandage here might be to use adaptive thresholds, but no discussions are currently present in the submission.\", \"minor_issues\": \"1. Classic CARTs are very sensitive towards outliers. The \\\"split one example out each time\\\" scenario being analyzed in the paper is likely to cause overfitting, chasing the outliers, and instability. It should be avoided. \\n2. CARTs default greedy building algorithm uses entropy or Gini index which are indifferent towards both positive and negative examples. Since the focus in the paper is on positive predictions, it is worth discussing how the algorithm should be changed accordingly.\\n3. It would be better if there were instructions in the paper regarding choosing the mixed node threshold theta, the tree depth, and hopefully the ensemble size. \\n4. 
Since CDTs are still a tree ensemble, their capacity is expected to be larger than that of a single decision tree, even one that is a bit deeper. The results pertaining to the accuracy, precision and recall in the empirical study section therefore constitute a slightly unfair comparison - it would be better to benchmark against decision tree ensembles or rule-based models [1]. \\n\\n[1] Wang, Fulton, and Cynthia Rudin. \\\"Falling rule lists.\\\" Artificial Intelligence and Statistics. 2015.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
zFM0Uo_GnYE | On the Importance of Looking at the Manifold | [
"Nil Adell Mill",
"Jannis Born",
"Nathaniel Park",
"James Hedrick",
"María Rodríguez Martínez",
"Matteo Manica"
] | Data rarely lies on uniquely Euclidean spaces. Even data typically represented in regular domains, such as images, can have a higher level of relational information, either between data samples or even relations within samples, e.g., how the objects in an image are linked. With this perspective our data points can be enriched by explicitly accounting for this connectivity and analyzing them as a graph. Herein, we analyze various approaches for unsupervised representation learning and investigate the importance of considering topological information and its impact when learning representations. We explore a spectrum of models, ranging from uniquely learning representations based on the isolated features of the nodes (focusing on Variational Autoencoders), to uniquely learning representations based on the topology (using node2vec) passing through models that integrate both node features and topological information in a hybrid fashion. For the latter we use Graph Neural Networks, precisely Deep Graph Infomax (DGI), and an extension of the typical formulation of the VAE where the topological structure is accounted for via an explicit regularization of the loss (Graph-Regularized VAEs, introduced in this work). To extensively investigate these methodologies, we consider a wide variety of data types: synthetic data point clouds, MNIST, citation networks, and chemical reactions. We show that each of the representations learned by these models may have critical importance for further downstream tasks, and that accounting for the topological features can greatly improve the modeling capabilities for certain problems. We further provide a framework to analyze these, and future models under different scenarios and types of data. | [
"Topological Learning",
"GNN",
"VAE"
] | Reject | https://openreview.net/pdf?id=zFM0Uo_GnYE | https://openreview.net/forum?id=zFM0Uo_GnYE | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"KdweH3AfXdd",
"ZM9lGrL9s9A",
"gj1W5-VatE",
"cQ9A4lkx6AM",
"EmdvrOVHYId",
"caC1m8LsR8P"
],
"note_type": [
"decision",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040385597,
1606263953326,
1604277339501,
1603898270909,
1603860336634,
1603733585112
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3592/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3592/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3592/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3592/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3592/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"While the motivation of the paper is interesting the reviewers expressed concerns about the experimental setup, comparison to related work, and paper framing. For experiments, it was unclear why authors compared such disparate methods instead of more fine-grained adjustments (e.g., such as corrupting graphs as suggested by R3). For comparison, other methods such as Deep Walk and VGAE (as suggested by R1) seemed missing. I think the biggest issue however was with framing: as the reviewers pointed out, it was not clear enough how looking at downstream performance relates to looking at the manifold. In fact the paper title is much too general and is also well-known: manifold learning has been around for 15+ years. I would urge the authors to take the recommendations of reviewers and either design new experiments that explicitly target the manifold or reframe the paper to design new evaluation metrics for latent (possibly structured) generative models.\"}",
"{\"title\": \"Answer to the reviewers\", \"comment\": \"Answer to all reviewers.\\n\\nFirst of all we thank all the reviewers for the time and effort dedicated to reading and reviewing our paper. Seeing that several cons were shared across reviewers we decided to address them together.\\n\\nThe intent of the paper was to study the influence of the topology across the scale shown in the paper. The addition of the GR-VAE is motivated by the intent of filling a particular shade of that spectrum.\\n\\nWe appreciate the recommendation of including more more models into the study. We added text baselines for GAE and GraphSAGE in the text study, and we are aware that the study could be further expanded with more models. Furthermore, we expanded the text classification results for the DGI using subsampled graphs (i.e. same sampling method by which GR-VAE is trained) and incorporating the representations of the variational autoencoders as features.\\n\\nWe want to make the same point about the datasets, it has been commented that downstream prediction tasks may not be the optimal comparison method. It was a direct way for us to asses the goodness of the representations in respect to a specific task, however we are aware that can in itself be limiting. We welcome the different suggestions done in the comments (e.g. controlling adjacency or corrupting the input graphs) and we will be looking at them.\\n\\nFinally, just reiterate our thanks for the reviewers' comments and time invested in looking into our work. All the feedback we received is highly welcome and a helpful light on how to improve and iterate this work.\"}",
"{\"title\": \"The idea is interesting but the method and experiments are not convincing\", \"review\": \"**Summary**:\\nThis paper investigates different ways of incorporating topological information about the data in the machine learning models. The paper introduces a novel loss that aims to enforce the relational information between data points into the embedding space learned by a Vae on the node features. The experiments demonstrate that for data with a certain topology type, the introduced loss can provide performance when used together with existing methods. The paper opens up possibilities of further investigation into incorporating topological information (if available) into the learning procedure.\\n\\n**Pros**:\\n1. The paper is very well-written and easy to follow. The illustrations also present the idea clearly.\\n2. The problem of understanding the importance of topological information is interesting, and could lead to future works.\\n\\n**Cons**:\\n1. I am not sure if there is enough novelty for acceptance to ICLR, especially when the proposed method does not provide obvious benefit over existing methods, both in theory and practice. Specifically, the GR-VAE is a simple extension of VAE which does not yield good performance, unless it\\u2019s combined with DGI. Even in that case, it is not obvious to me the benefit is significant except in one case (namely for text representations), but when combined with DGI, it becomes unclear whether the performance boost is actually coming from the proposed loss or some other unintended regularization effect since DGI already uses message passing to incorporate the (local) topological structure. Am I missing something here?\\n2. Since the paper is positioned to be an experimental study, it is perhaps acceptable to have limited novelty or improvement. However, the findings in the papers are somewhat expected. For example, we already know that GNN [1] based methods are superior to other methods on citation benchmarks since they account for both features and topology. In this light, I feel there is not too much new insight in the paper to warrant a publication at ICLR.\\n3. For an empirical study, considering only four models may not be enough (e.g. different GNNs / graph models encode different topological information and that should be taken into account for a full spectrum). The same holds for feature-based methods. Some examples of this are Deep walk [3], GNN architecture or different VAE architecture and loss (especially there are so many variations of VAE). Most importantly, a comparison to [2] is missing.\\n\\n**Comment**:\\n1. \\u201cRegularized\\u201d -> \\u201cregularizer\\u201d in conclusion line 6 paragraph 2?\\n2. For a double-blind reviewing process, the funding information should perhaps be removed, although the one in this paper does not expose the authors' identity.\\n\\n**Conclusion**:\\nWhile the work has interesting motivations and is well-written, it has not done a convincing job at demonstrating the effectiveness of their proposed method or shown a thorough experimental analysis. 
As such, I am inclined to reject the paper in its current form.\\n\\n**Reference**:\\n\\n[1] Semi-Supervised Classification with Graph Convolutional Networks (https://arxiv.org/abs/1609.02907)\\n\\n[2] Variational Graph Auto-Encoder (https://arxiv.org/abs/1611.07308)\\n\\n[3] Deepwalk: Online learning of social representations (https://arxiv.org/abs/1403.6652)\\n\\n\\n**================================== Update after rebuttal ==================================**\\n\\nIt seems that the authors have only provided a general comment for all reviewers, which is understandable since most criticisms from all reviewers are on the same weaknesses of the paper.\\nWhile I appreciate the author's effort in adding more experiments, I do not think the added experiments and reply properly addressed the concerns shared by other reviewers and myself. For example, it is still unclear what the advantage of the proposed method is or what insights we could gain from this study.\\nI think this paper is not ready for publication. As today is the last day of discussion period, I will maintain my original assessment.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for On the Importance of Looking at the Manifold: Reject\", \"review\": \"Summary:\\n\\nThe authors present a regularisation term for Variational Auotencoders that forces the distance of mapped points in the embedding space to be similar to the distance of those points in the metagraph of the data derived from relational information about these points. The intention of the regularisation term is to enforce a consistent graph between the original representation and the embedded representation in a manner that is agnostic to the structural choices of the model to be estimated.\", \"strengths\": \"1) The paper provides a good organisation of existing methods for utilising topological information, and, thus, positions its contributions well in relation to existing work.\\n\\n2) The empirical results are presented honestly, even when they do not support the proposed method.\", \"weaknesses\": \"\", \"ordered_form_less_to_more_specific\": \"1) The intention of the paper is unclear.\\n\\tIs this intended as a review paper of recent research on graph neural networks or to present a new regularisation term? The title and abstract seem to imply that this is intended as a review, but the paper only considers three existing methods and a single modification to VAEs. Unfortunately, the paper is not convincing in either regard, and the idea of analysing topological information to improve classifications and representations is already well discussed in the literature and a very active area of ongoing research.\\n\\n2) The efficacy of the proposed regularisation is not convincingly supported either theoretically or empirically.\\n\\tThe authors state, \\u2018Notably, GR-VAE is devised to infer topological information solely from a soft constraint, without any architectural requirements such as graph convolutions\\u2019 (line 1, p. 4), but this is not discussed further. The intention of graph convolutions is to explicitly encode assumptions about the relationships present in the data, in this situation why would I prefer a soft constraint to a well motivated, explicit one? If structural constraints were unduly restricting the expressiveness of the models, I would expect to see this borne out in the empirical results, but this is not the case across the chemical reaction and citation network experiments. \\n\\n3) The paper struggles with clarity at points.\\n\\tSpecifically, equation (1), which describes the regularisation term is unclear as written: does the plus sign in the exponent denote absolute value or something else? This notation is non-standard and I would not be able to faithfully recreate the results as written. \\n\\tAdditionally, the method for constructing the meta-graph G should be discussed in more detail. From what is this graph derived? Is it an existing observed graph that describes the observed relationships between the data, ex. the citation network or pixels in an image, or does it describe adjacency of the observations as defined by the observed labels or other meta-information. 
My concern is that using a soft-constraint which effectively focusses the model on the labels in an \\u2018easy\\u2019 task such as MNIST classification or the synthetic data task hides the fact that the constraint is too soft to produce a useful regularisation of the model, as evidenced by the failure of GR-VAE relative to DGI or even vanilla VAE in the citation network and chemical reactions tasks.\", \"reasons_for_score\": \"I vote for rejecting the paper, as while I really do appreciate that the results of the paper are presented honestly, I think there are concerns with the current draft.\", \"questions_for_the_rebuttal_period\": \"Please refer to the questions in the weaknesses section.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
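For concreteness, one plausible reading of the regularisation term debated above is a penalty matching pairwise latent distances to meta-graph distances. Given the reviewer's point that Eq. (1) is ambiguous as written, the following is a hedged reconstruction of the general idea, not the paper's exact loss:

```python
import torch

def graph_regularizer(z, graph_dist):
    """z: (n, d) latent codes for a batch; graph_dist: (n, n) precomputed
    shortest-path (or other relational) distances on the meta-graph G for
    the same n samples. Penalises mismatch between Euclidean distances in
    the embedding and distances on the graph."""
    latent_dist = torch.cdist(z, z, p=2)
    return torch.mean((latent_dist - graph_dist) ** 2)

# Added to the usual VAE objective as, e.g.:
#   loss = recon_loss + kl_loss + lam * graph_regularizer(z, d_g)
```

Such a term is a soft constraint on the encoder only, which matches the "no architectural requirements" framing quoted in weakness 2.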
"{\"title\": \"A well-written paper, idea is simple, but experiment might be insufficient\", \"review\": \"==== Summary ====\\n\\nThis paper proposes a variant of Variational Autoencoders (VAEs) which takes extra topological information (e.g. adjacency matrix) into account in the loss function during training. The principle objective of the proposed GR-VAEs is to use the learned latent space features as the input for improvement on downstream tasks especially for classification. \\n\\n==== Pros ====\\n\\n+ This paper is well-written and the idea is clear and easy to follow.\\n+ The proposed GR-VAE, specifically the extra loss function on preserving the geodesic distance seems to be effective for learning desired latent space features from experiments.\\n\\n==== Cons ====\\n\\nMy main concern is on the experiment on showing the improvement for downstream tasks from the embedding learned by GR-VAE.\\n\\n- I think the goal of the experiment is to demonstrate that, the embedding learned by GR-VAE is superior comparing to other kinds of features such as from the vanilla VAE, so I expect the embedding by GR-VAE is applied to different models (e.g. GraphSAGE, GCN, DeepWalk) instead of just DGI (at least I only notice that DGI is adapted), and on each model the result from using GR-VAE can outperform others using raw data features or other kinds of finetuned features. I think showing the improvement on different models can greatly enhance the soundness of this paper.\\n\\n- In addition, for the experiment on chemical reaction I think there should be another baseline showing the result of DGI by using the raw data features if possible, this can further demonstrate the importance of GR-VAE. Also, for the experiment on text representation I also expect some result like using finetuned GR-VAE as the input for DGI in the chemical reaction experiment, currently the text representation experiment only shows the strength of the DGI itself, which does not make sense to me.\\n\\n==== Reason for scoring ====\\n\\nOverall, I think the proposed GR-VAE is sound if its strength can be demonstrated by more experiment mentioned above, and I am willing to upgrade my rating and vote to accept if such concern can be addressed during the rebuttal period.\\n\\n==== Minor Comments ====\\n\\n- The plots in Figure 3 is too blurry to distinguish between the cross marker and round marker.\\n- I notice for the synthetic dataset the direction of edges for each node is used as part of the input features, so what is the definition for the edge direction? Also, if we directly combine the raw data feature with embedding by some manifold learning technique, and input it into the vanilla VAE, can we get similar result (the graph topology is preserved) as GR-VAE has?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper studies the importance of utilising manifold/topology information in prediction tasks. The method is not novel enough and the comparison seems problematic.\", \"review\": \"The paper focuses on studying the importance of utilising manifold/topology information for machine learning tasks. To this end, the authors benchmark four different approaches, including VAE, GR-VAE (using graph distances to regularise embedding distances (as shown in Eq. 1)). The paper performs experiments on four tasks, including synthetic data, MNIST, text representation, and chemical reactions. As conclusion, the paper demonstrates that in some cases, adding relational information is beneficial, while in other cases, the effect is subtle. Thus, the paper aims to provide a metric for understanding when and how manifold/topology information is needed.\", \"pros\": \"1. Instead explicitly learning a graph, this paper proposes an implicit method with graph regularisation.\\n\\n2. The related work is well-explained. The paper did a well summarization of previous methods.\\n\\n3. The paper performs extensive experiments to study the importance of manifold for prediction tasks.\", \"cons\": \"1. The latent graph method is not novel enough, since the method can be categorised as graph regularisation, which is a widely used method in recommendation and information retrieval. Could the author explain why this regularisation is picked from a spectrum of graph regularisation algorithms?\\n\\n2. The comparison of the paper is problematic. First, the methods (DGI, node2vec, GR-VAE, VAE) compared are quite different methods. Can the authors confirm that the comparison is fair and meaningful (e.g. eliminating other confounding factors like controlling the number of parameters)? Second, I am not sure whether this comparison is optimal. In particular, to study the importance of relational information, other method can be used. For example, we can control adjacency matrix received by graph neural networks. We can totally ignore the edge information (like VAE in the paper) or use a predefined graph (e.g. a fully connected graph like in Transformer). In between, we can corrupt input graphs (e.g. randomly adding or deleting some edges) before feeding it to graph neural networks. This approach seems more reasonable to me for studying the importance of manifold. It is difficult to control these in this paper because the methods used in this paper are totally different (e.g. DGI and GR-VAE differs in both loss function and input format). So, the conclusion of the paper is skeptical. It mainly justifies which method can perform better in downstream tasks instead of justifying the importance of manifold. \\n\\n3. The introduction is lengthy and should be more focused on the contribution of this paper. Similarly, the other sections need a major revision to highlight the contribution, as the main contribution of the paper lies in the implicit graph regularisation and a comparison of a series of methods with/without relational information.\\n\\n4. Some baseline methods are not considered, for example the methods learning latent graphs: Semi-supervised classification with graph convolutional networks and Glomo: Unsupervised learning of transferable relational graphs.\\n\\n5. The acknowledgement of the paper reveals location information, which may be a violation of anonymity. 
\\n\\nBased on these cons, I think a more rigorous comparison is needed.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Te1aZ2myPIu | Pretrain-to-Finetune Adversarial Training via Sample-wise Randomized Smoothing | [
"Lei Wang",
"Runtian Zhai",
"Di He",
"Liwei Wang",
"Li Jian"
] | Developing certified models that can provably defend against adversarial perturbations is important in machine learning security. Recently, randomized smoothing, combined with other techniques (Cohen et al., 2019; Salman et al., 2019), has been shown to be an effective method to certify models under $l_2$ perturbations. Existing work for certifying $l_2$ perturbations added the same level of Gaussian noise to each sample. The noise level determines the trade-off between the test accuracy and the average certified robust radius. We propose to further improve the defense via sample-wise randomized smoothing, which assigns different noise levels to different samples. Specifically, we propose a pretrain-to-finetune framework that first pretrains a model and then adjusts the noise levels for higher performance based on the model’s outputs. For certification, we carefully allocate specific robust regions for each test sample. We perform extensive experiments on the CIFAR-10 and MNIST datasets, and the experimental results demonstrate that our method can achieve a better accuracy-robustness trade-off in the transductive setting. | [
"Adversarial Robustness",
"Provable Adversarial Defense",
"Sample-wise Randomized Smoothing."
] | Reject | https://openreview.net/pdf?id=Te1aZ2myPIu | https://openreview.net/forum?id=Te1aZ2myPIu | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"LmiQWlu_56Z",
"DwgB5ICrs2-",
"R3ybHj6QLFs",
"aE4yMXl-2wl",
"vNIrJZmGwTx",
"fb8vUdj8pCO",
"DLSvZv-kTu",
"QvhnP-lpQ2l",
"O_x2aTsptkV",
"pr4AAm4ZEGt",
"U7vgvB00KNk"
],
"note_type": [
"comment",
"comment",
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1615705908230,
1615627656497,
1610040513100,
1606204543821,
1606204472258,
1606204320951,
1606204182692,
1604962436802,
1604427639719,
1603820630577,
1603785716486
],
"note_signatures": [
[
"~Lei_Wang22"
],
[
"~Mao_Ye12"
],
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3591/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3591/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3591/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3591/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3591/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3591/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3591/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3591/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Response to the robust radius theorem.\", \"comment\": \"Thanks for your question.\", \"your_concern_is_right_and_we_also_mention_this_issue_in_the_introduction\": \"a certain robustness radius around a point can be certified only if all points within the radius are assigned the same Gaussian variance.\\n\\nTo address this issue, we divide the input space into 'robust regions' and the samples in the same region are assigned the same noise level. Further, we make sure that the certified l2-ball does not exceed the 'robust region' it falls in so that the theorem still holds in the region.\"}",
"{\"title\": \"Does the robust radius Lemma still holds for the algorithm in this paper?\", \"comment\": \"Hi Authors,\\n\\nThanks for your interesting papers. I have one questions on your paper.\\n\\nSince the noise level sigma actually depends on the input image x, thus we can view sigma as a function of x. In this case I feel the key robust radius lemma no more holds. The reason is that, given x, for some x' in its l2 balls, the sigma for x' might be different than x and thus result in Cohen's paper seems no more applicable.\\n\\nCould you clarify that?\\n\\nThanks.\"}",
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": [\"The paper considers an extension of randomized smoothing where the smoothing noise may differ for different points. The resulting method shows good performance experimentally. However, the reviewers raised a number of problems which, at the moment, precludes the acceptance of the paper, such as the following:\", \"The paper analyzes the transductive setting, where all the test points are available to fine-tune the smoothing parameters of the predictor. It is not clear how this setting corresponds to a real adversarial threat model, and whether the final tuning needs to use the perturbed or unperturbed points. In the first case, the resulting certified radius is different from what is normally used in the literature, while in the latter it is not clear how the method would be useful to mitigate any real adversarial attack.\", \"A related comment is that the paper should explain (and state) properly how the results of Cohen et al. (2019) are applicable to compute the certified radius, which would also provide a proper explanation why partitioning is used.\", \"The training cost of the procedure seems very high, and this is not discussed.\", \"The clarity of the presentation should be improved.\"]}",
"{\"title\": \"Response to AnonReviewer5\", \"comment\": \"Thanks for your detailed and insightful comments. Here are the responses to your concerns.\\n\\n1. **Evaluation**. In the prediction procedure, there are indeed some differences: we present our results in the transductive setting. We assume that the test dataset is known except labels. Hence, we allocate regions and assign different sigmas based on the results of the test dataset. Instead of caching the clean image + label, we only guarantee the points in the same region use the same standard deviation. As for adversarial image, it can be any perturbed image and we can assign a suitable standard deviation for it according to its location.\\n2. **Regions Allocation**. According to the proof of the randomized smoothing theorem in (Cohen et al., 2019)[1], we cannot assign arbitrary noise level to any test point; a certain robustness radius around a point can be certified only if all points within the radius are assigned the same Gaussian variance. So, we use different regions $B$ in the prediction procedure and the datapoints located in the same region are assigned the same standard deviation. In our experiments, the fraction of the test points in an existing $B_i$ is very small.\\n3. **Certified Robust Radius**. As the Smooth-Adv model is trained with $\\\\sigma=0.25$, the distances between most noisy\\n samples and the clean sample are within $3\\\\sigma$. Among these noisy points, if only 50% of the predictions are correct, then the robust radus is significantly small, nearly 0. However, if 99% of the predictions are correct, we can get a larger robust radius(nearly 0.95) according to (Cohen et al., 2019)[1]. So, we do not only focus on the robustness with radii significantly below 0.25.\\n4. **Ablation for Finetuning**. We have not conducted ablation for finetune procedure as we treat pretrain-to-finetune as a whole, and it is an interesting future direction.\", \"references\": \"[1] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thanks for your detailed and insightful comments. Here are the responses to your concerns.\\n\\n1. In this work, our sample-wise method is applied to the current optimal model Smooth-Adv, but the method we propose does not depend on Smooth-Adv. We believe that our method can be used as a tool and directly applied to other methods to improve certifiable robustness.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thanks for your detailed and insightful comments. Here are the responses to your concerns.\\n\\n1. **Regions Allocation**. We first explain the necessity of the allocation of regions in the prediction step. According to the proof of the randomized smoothing theorem in (Cohen et al., 2019)[1], we cannot assign an arbitrary noise level to any test point; a certain robustness radius around a point can be certified only if all points within the radius are assigned the same Gaussian variance. Otherwise, it is not certifiable. So, we allocate regions in prediction to ensure correctness.\\n2. Allocating a region for every single test point is also fine, but these regions would be smaller to ensure there is no intersection between different regions.\\n3. **$B_{i_j}$**. $B_{i_j}$ stands for the robust region for the test point $x_{test}^j$. It belongs to a larger region $B_i$. All points in the region $B_i$ use the same sigma. \\n4. **Necessity of Pretraining**. For the pretrain-to-finetune framework, if we remove the pretrain-to-finetune framework which in fact is the Smooth-Adv-diff which is shown in appendix B.2, it performs nearly the same as the original Smooth-Adv. We think that Smooth-Adv-diff is trained with a fixed sigma, even if applying the sample-wise method in prediction, the model still prefers the sigma used in training which makes the sample-wise ineffective. \\n5. **Varying Sigma**. Since different standard deviations are used in the testing process, we use the varying sigma in training expecting that the model can find the best sigma for different points. The finetune procedure is to select the best sigma based on the results of the pretrain process, hence, if we do not pretrain a base model, we cannot even select sigmas. About using Smooth-adv as the pretrain model, we think that the finetune model may prefer the sigma used in pretraining which may limit the performance of the model.\\n6. **Noise Level**. The reason that we always choose sigma 0.12 as the starting point is that Smooth-Adv uses 0.12 as their smallest noise level. In order to maintain the consistency of the noise range, we directly use 0.12 as our minimum noise.\", \"references\": \"[1] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"Thanks for your detailed and insightful comments. Here are the responses to your concerns.\\n\\n1. **Computation Cost**. In our method, the computation cost mainly comes certification procedure both in training and prediction. Take CIFAR-10 for example. In training, we have to certify every train set datapoint, which is 5 times the test set size. We choose 0.05 as the sigma interval which means we have to certify 20 different sigmas and we only use 500 samples which are 1/200 of 10w samples. So, it costs nearly $5 * 20 * 1/200 = 0.5$ times comparing with certification procedure on test set proposed by (Cohen et al., 2019)[1]. In prediction, we can also first use 500 samples to assign sigma and then use 10w samples in certification, so it costs nearly 1.1 times the time of certification. To sum up, taking the pretrain-to-finetune framework into account, it is nearly 2 times the computation cost compared with Smooth-Adv.\\n \\n2. **Implementation on Vanilla Gaussian Augmented Models**. We implemented our framework based on the (Cohen et al., 2019)[1]. For CIFAR-10 and MNIST, we use the maximum noise level 1.00, and the results are shown below:\\n\\n | Dataset | Model | 0 | 0.25 | 0.50 | 0.75 | 1.0 | 1.25 | 1.5 | 1.75 | 2.0 | 2.25 | ACR |\\n | -------- | ----- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | --------- |\\n | MNIST | Cohen | 0.95 | 0.92 | 0.87 | 0.81 | 0.72 | 0.61 | 0.50 | 0.34 | 0.20 | 0.10 | 1.417 |\\n | MNIST | Ours | 0.99 | 0.99 | 0.97 | 0.93 | 0.85 | 0.74 | 0.60 | 0.42 | 0.25 | 0.12 | **1.609** |\\n | CIFAR-10 | Cohen | 0.47 | 0.39 | 0.34 | 0.28 | 0.21 | 0.17 | 0.14 | 0.08 | 0.05 | 0.03 | 0.458 |\\n | CIFAR-10 | Ours | 0.70 | 0.67 | 0.58 | 0.48 | 0.38 | 0.30 | 0.24 | 0.18 | 0.14 | 0.10 | **0.900** |\\n\\n From the results, our method achieves a significant improvenent over ACR. In particular, it outperforms (Cohen et al., 2019)[1] significantly on CIFAR-10 with small $l_2$ radius. For ACR on CIFAR-10, our sample-wise method based on (Cohen et al., 2019)[1] outperforms Smooth-Adv but is still worse than our sample-wise method based on Smooth-Adv.\\n\\n3. According to the results on CIFAR-10 reported in (Salman et al., 2019)[2] (as follow), our sample-wise method outperforms the pre-training and semi-supervised methods when the $l_2$ radius is larger than 0.75. It would be interesting to see how much gains can be achieved with these two methods especially with smaller $l_2$ radius and we leave it as an interesting future direction.\\n\\n | Model | 0.25 | 0.5 | 0.75 | 1.0 | 1.25 | 1.5 | 1.75 | 2.0 | 2.25 |\\n | ------------------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ | ------ |\\n | SmoothAdv | 73 | 58 | 48 | 38 | 33 | 29 | 24 | 18 | 16 |\\n | + Pre-Training | 80 | 62 | **52** | 38 | 34 | 30 | 25 | 19 | 16 |\\n | + Semi-supervision | 80 | **63** | **52** | 40 | 34 | 29 | 25 | 19 | 17 |\\n | + Both | **81** | **63** | **52** | 37 | 33 | 29 | 25 | 18 | 16 |\\n | Ours | 74 | 61 | **52** | **45** | **41** | **36** | **32** | **27** | **23** |\\n\\n4. **Settings**. The results showing in Sec 5.2 are obtained in the online setting.\\n\\n5. **Sigma Interval**. Sigma interval mainly controls the computation cost. If we halve the interval, it takes double time to allocate the standard deviation. As for ACR/ACA, we suspect that they would be improved as we can choose sigma more accurately.\", \"references\": \"[1] Jeremy M Cohen, Elan Rosenfeld, and J Zico Kolter. 
Certified adversarial robustness via randomized smoothing. arXiv preprint arXiv:1902.02918, 2019\\n\\n[2] Hadi Salman, Jerry Li, Ilya Razenshteyn, Pengchuan Zhang, Huan Zhang, Sebastien Bubeck, and Greg Yang. Provably robust deep learning via adversarially trained smoothed classifiers. In Advances in Neural Information Processing Systems, pp. 11289\\u201311300, 2019.\"}",
"{\"title\": \"A method for adaptive smoothing parameters in randomized smoothing\", \"review\": \"This paper suggests an extension of randomized smoothing, wherein the degree of smoothing is optimized both at training and test-time on each individual sample. At training time, the model is first \\\"pre-trained\\\" using a range of smoothing parameters (variance of the Gaussian perturbations), and then \\\"fine-tuned\\\" by selecting the variance on each sample which maximizes the verified radius. At test time, we can again select the smoothing parameter to maximize robustness.\", \"pros\": [\"Numerically, the results seem fairly strong\"], \"cons\": \"- It's unclear to me whether the evaluation is fair\\n\\n\\nA few (somewhat critical) questions:\\n\\n1. (Major) For the test-time procedure, this procedure selects $\\\\sigma$ based on a computed robustness statistic. I assume that this robustness statistic uses the original image, as in other randomized smoothing approaches? (as opposed to an adversarially perturbed image). If so, this comparison seems somewhat unfair - the typical threat model is that the classifier does not get to first see the nominal image (otherwise, the classifier could cache the clean image + label, and use a nearest-neighbor lookup against its cache to handle any adversarial images.) If not, could you explain how the adversarial image is selected here?\\n\\n2. What is the purpose of the balls $B$ in the section on \\\"Predicting Procedure.\\\"? What do they add compared to computing $r^j$ directly? For what fraction of the test set is an existing $B_i$ found including the test point? (I would expect this fraction to be very small?)\\n\\n3. It seems that for e.g. the SmoothAdv model trained with $\\\\sigma = 0.25$, we should be interested in robustness with radii significantly below 0.25 (and certainly not above it). Am i misunderstanding the naming of the models?\", \"minor_points\": [\"It would be interesting to see an ablation of whether the fine-tuning phase helps.\", \"The presentation of the algorithm could be significantly simplified (lots of notation is unnecessarily complicated, double subscripting, going into details before explaining the idea, lots of new symbols introduced throughout, etc.). The pseudocode is very helpful.\"], \"overall\": \"It's clear that the authors have put significant effort into this submission, but I believe it does not currently meet the necessary bar for ICLR, though I may adjust my rating if the rebuttal satisfactorily addresses the points above. I hope some of this feedback will be useful to the authors.\", \"edit\": \"Thanks for the clarifications. Unfortunately, none of the responses are enough for me to update my rating.\", \"one_thing_regarding_point_1_in_particular\": \"the transductive setting seems contrived for adversarial robustness as it does not seem to correspond to a plausible threat model. It's true that in the transductive setting, the examples don't have labels, but since clean accuracy >> robust accuracy, just caching predicted labels on the clean examples is roughly as good (which can be done even if test labels are not available).\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Nice but straightforward extension of existing method\", \"review\": \"The paper propose a method to improve the randomized smoothing algorithm for certified robustness against adversarial attacks.\\nThe idea is that, instead of adding the same Gaussian noise to every data points, it uses a different standard deviation for each data points. When an example is far away from the decision boundary, one can add more noise.\", \"pros\": [\"Certified robustness is an important problem in adversarial ML, and randomized smoothing is one of most promising methods.\", \"The proposed method is intuitive and seems to be a practical way to improve the original randomized smoothing algorithm\", \"Experiments show that, the certified accuracy on CIFAR-10 really increases\"], \"cons\": [\"It seems to me that the proposed method is a relatively straightforward extension from the original randomized smoothing algorithm, so the technical contribution is limited.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Interesting work but requires more clarifications\", \"review\": \"This paper considers the problem of provably defense to adversarial perturbations using randomized smoothing. The authors propose sample-wise randomized smoothing -- assigning different noise levels to different samples. They also propose to first pretrain a model and then adjust the noise for higher performance based on the model\\u2019s outputs. Experiments show that proposed approach improves the performance of randomized smoothing with same noise level for small perturbations.\", \"pros\": \"1)\\tThe paper is well written and easy to read.\\n2)\\tThe idea of sample-wise randomized smoothing is interesting, and results are reasonable.\\n3)\\tIssues with assigning arbitrary noise level to test points is well described/thought and solutions (online and batchwise) are proposed to make it compatible. \\n4)\\tExperimental setup is comprehensive and appropriate ablation studies have been performed.\", \"cons\": \"1)\\tMy main concern with this work is that it is not clear to me that these ACR gains are being achieved at what cost? It appears that sample-wise randomized smoothing adds an additional computational complexity during both training and prediction/certification phases. I would like to see the train and prediction cost comparison with standard train/test, MACER, and vanilla random smooth model. This comparison will provide a better insight into the performance as on some cases, e.g., MNIST, the sample-wise RS performs pretty close to the baselines. I will argue that in the computation cost on sample-wise RS is significantly higher than the baseline robust approaches, one can simply increase the m_test in those approaches. \\n2)\\tIt will insightful to see how much gain the proposed scheme achieves with vanilla gaussian augmented models (authors only show these results with smooth adversarially trained models).\\n3)\\tSimilar to adv-smooth, it will be useful to see how much gain can be achieved with: 1) pre-training, and 2) semi-supervised learning. \\n4)\\tResults in Sec 5.2 is for online or batch setting?\\n5)\\tHow does resolution of grid or \\\\sigma_interval impact the performance (train and prediction/certification time and ACR/ACA)?\", \"minor\": \"1)\\tThere seems to be typo in Sec 5.1: [0, 12, 0.25].\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Motivation and explanations of methodology could be improved\", \"review\": \"This paper proposes an improved sample-wise randomized smoothing technique, where the noise level is tuned for different samples, for certification of robustness. Further, it also proposes a pretrain-to-finetune methodology for training networks which are then certified via sample-wise randomized smoothing. The authors show in experiments on CIFAR and MNIST that combining their training methodology and certification methodology can sometimes improve the average certified when compared to state-of-the-art randomized smoothing techniques Smooth-Adv (Salman et. al, 2019).\\n\\nI recommend a rejection because the key takeaways of the paper should be clarified and the pretrain-to-finetune framework and the allocation of regions must be explained and justified better.\\n\\nThe key idea of using different noise levels for different samples is intuitive and explained well in the motivation section (4.1). Furthermore, the authors show that their methodology does indeed lead to minor improvements in average certified l2-radius on Smooth-Adv for the CIFAR dataset, which is a more interesting dataset than MNIST, where the proposed technique performs similarly or slightly worse than Smooth-Adv.\\n\\nHowever, the paper does have shortcomings in its clarity and organization. First, I think the sample-wise certification is a clear and well-motivated idea, and should be discussed as the major contribution, rather than the pretrain-to-finetune framework. Furthermore, I was confused about the allocation of regions in the prediction step of the sample-wise certification; explaining why it is necessary, and why it is better than allocating a region for every single test datapoint (which is what I thought the motivation section in 4.1 explained) would improve the paper significantly. Finally, the amount of notation in the paper should be simplified significantly, and the notation often makes the paper more confusing (and sometimes, I could not understand due to either incorrect or unclear notation). For example, the pseudocode in Algorithm 1 would have been better if the notation was simplified, and in Algorithm 2, I did not know what B_{i_j} referred to at all.\\n\\nSpecifically regarding the allocation of regions, I did not understand why it was necessary or led to improvements over choosing a new region for each test datapoint. Explaining it clearly, and showing an ablation study that compares using region-allocation and not using region-allocation would provide good motivation for its use.\\n\\nSpecifically regarding the pretrain-to-finetune framework, I have the following questions:\\nI saw that in Appendix C that the pretrain-to-finetune framework is necessary for the sample-wise randomized smoothing to show an improvement. Are there explanations for why sample-wise randomized smoothing does not well work by itself?\\n\\nWhy does it make sense to do this 2 step procedure? Why does the pre-training have to involve varying noise levels if the fine-tuning procedure already finds the optimal noise level for each sample to train with? Could the pre-training just be the same as Smooth-Adv?\\n\\nHow much does it matter which noise levels we choose during the pre-training phase? 
I noticed that the authors usually chose noise from 0.12 up to the amount that they compare to with SmoothAdv, but the reasons for this are not discussed.\\n\\n\\nOverall, I feel that the paper has a well-motivated idea (sample-wise randomized smoothing) and shows some minor improvements in terms of results, but that clarity for all other parts of the paper must be improved significantly.\", \"post_rebuttal_update\": \"I appreciate the author response, but I will maintain my score after reading the rebuttal and discussion with other reviewers. It still appears to me that the motivation and clarity can be improved, and so I would recommend focusing on those aspects in future revisions. Additionally, baselines such as \\\"allocating a region for every single test point\\\" should be compared to in a clear way (as opposed to being in the appendix), as such baselines seem natural to compare to.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
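The discussion in the record above repeatedly leans on Cohen et al.'s (2019) certified radius and on choosing a per-sample sigma from a grid. The following sketch makes that machinery concrete. It is a minimal illustration under stated assumptions, not the authors' code: `base_classifier`, `sigma_grid`, and `n_samples` are invented names, the plain Monte-Carlo frequency stands in for the Clopper-Pearson bound used in practice, and, as the robust-radius thread notes, a radius computed this way is only valid while the certified ball stays inside the region that shares the chosen sigma.

```python
# Minimal sketch (assumptions flagged above) of Cohen et al.'s certified radius
# with sample-wise sigma selection; not the paper's released implementation.
import numpy as np
from scipy.stats import norm

def estimate_top_class(base_classifier, x, sigma, n_samples, rng):
    """Monte-Carlo estimate of the most likely class and its frequency under
    Gaussian noise N(0, sigma^2 I); a confidence lower bound would replace
    the raw frequency in a real certification."""
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = base_classifier(noisy)
        votes[label] = votes.get(label, 0) + 1
    top = max(votes, key=votes.get)
    return top, votes[top] / n_samples

def certified_radius(sigma, p_top):
    """Cohen et al. (2019): R = sigma * Phi^{-1}(p_top), valid for p_top > 1/2."""
    return sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0

def select_sigma(base_classifier, x, sigma_grid, n_samples=500, seed=0):
    """Sample-wise selection: scan the sigma grid and keep the sigma that
    maximises the certified radius for this particular input."""
    rng = np.random.default_rng(seed)
    best_sigma, best_radius = sigma_grid[0], 0.0
    for sigma in sigma_grid:
        _, p_top = estimate_top_class(base_classifier, x, sigma, n_samples, rng)
        radius = certified_radius(sigma, p_top)
        if radius > best_radius:
            best_sigma, best_radius = sigma, radius
    return best_sigma, best_radius
```

With a grid over roughly [0.12, 1.00] at 0.05 spacing and 500 samples per sigma, a scan like this is what drives the roughly 2x overhead estimated in the authors' response to AnonReviewer1.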
bnuU0PzXl0- | Evaluating Gender Bias in Natural Language Inference | [
"Shanya Sharma",
"Manan Dey",
"Koustuv Sinha"
] | Gender-bias stereotypes have recently raised significant ethical concerns in natural language processing. However, progress in the detection and evaluation of gender-bias in natural language understanding through inference is limited and requires further investigation. In this work, we propose an evaluation methodology to measure these biases by constructing a probe task that involves pairing a gender-neutral premise against a gender-specific hypothesis. We use our probe task to investigate state-of-the-art NLI models on the presence of gender stereotypes using occupations. Our findings suggest that three models (BERT, RoBERTa, and BART) trained on MNLI and SNLI data-sets are significantly prone to gender-induced prediction errors. We also find that debiasing techniques such as augmenting the training dataset to ensure that it is a gender-balanced dataset can help reduce such bias in certain cases. | [
"Natural Language Inference",
"Natural Language Understanding",
"Natural Language Processing",
"Gender Bias",
"Societal Bias",
"Bias",
"Ethics",
"Debiasing Techniques",
"Data Augmentation"
] | Reject | https://openreview.net/pdf?id=bnuU0PzXl0- | https://openreview.net/forum?id=bnuU0PzXl0- | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"yDAeem4_hm",
"Qla4zJrFS5k",
"YG4XU-X43E",
"AIgeZQkrf_-",
"_cSrCtAbOfz",
"4Rzo9Xxrzsj",
"yHj9RyxNfhg",
"7vO3nScdg6F",
"rHxs1nuoYTL",
"8L7hSvLM738"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040500368,
1605987639394,
1605607096693,
1605555539019,
1605554099796,
1605552713828,
1605551886871,
1603948332180,
1603944000598,
1603782904403
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3590/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3590/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3590/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3590/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3590/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3590/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3590/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3590/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3590/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper offers a new dataset and accompanying metric to measure the degree to which NLI (textual entailment) systems are aware of gender\\u2013occupation associations.\", \"pros\": [\"The paper deals with an important issue in the context of a visible set of models and datasets.\"], \"cons\": [\"The metric is designed to evaluate bias on models trained for a specific, precisely defined task, but it does not conform to the standard formulation of that task, which makes results on those metric untrustworthy and potentially arbitrary. Reviews had concerns about both the data (the use of references to the form of the premise text) and the metric (the handling of 'neutral' predictions).\", \"The proposed definition of bias is not clearly mapped onto a concrete potential harm.\", \"There has been substantial similar prior work on this problem. This doesn't invalidate this work, but it does raise the bar a bit, since arguments of the form 'we need to start a conversation about bias in models' are not pursuasive.\"]}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We would like to inform the reviewer that our evaluation set consists of both kinds of gender-specific premises: male and female, thus in a way both entailing and contradicting the hypothesis wrt the stereotypes.\\nFor eg. for a hypothesis: \\\"The guard was at the building\\\", we have premise 1: \\\"The text mentions a male occupation\\\" and premise 2: \\\"The text mentions a female occupation\\\". A biased model would predict entailment for premise 1 and contradiction for premise 2. However, an ideal model should have the same prediction for both cases. \\n\\n\\\" Although the experiments consider overlapped words, it is still not convincing for me since the training domain and the evaluation domain are still quite different\\\": We understand the reviewer's concern regarding the structure of hypothesis. \\nHowever, our experiments aim at indicating the presence of bias in the models' predictions and the improvement in the predictions after debiasing the model by training it on a gender-balanced training set and evaluating it on the same evaluation set is, in our opinion, an indicator of bias.\"}",
"{\"title\": \"Response to some points\", \"comment\": [\"Thanks for your response.\", \"For (2). \\u201cThe constructed dataset contains only entailment pairs. How about analyzing the contradiction cases as well?\\u201d I mean that you only design an evaluation dataset for entailments. However, there can be some bias when the model makes contradiction predictions. Is your method able to extend in that case?\", \"For (4). Although the experiments consider overlapped words, it is still not convincing for me since the training domain and the evaluation domain are still quite different.\"]}",
"{\"title\": \"Revised paper and supplementary materials have been uploaded\", \"comment\": \"We appreciate the reviewers' valuable comments and insightful feedback. Following reviewers' advice, we updated the manuscript. Below is a summary of the revision:,\\n\\n1. We changed the kind of plot used in Figure 1 and the corresponding figures in the Appendix.\\n2. We conducted two more experiments to address the reviewer's concerns regarding the structure of hypothesis. The experiment details and results have been added in the appendix (added as a part of the supplementary material)\\n3. We've fixed the minor corrections (e.g. guard->teacher in Table) and polished our writing in certain places to make the paper clearer.\"}",
"{\"title\": \"Response to reviewer 3\", \"comment\": \"We thank the reviewer for their detailed and helpful feedback. Below we try to respond to their comments. We've accordingly updated the paper in order to make it clearer.\\n\\n1. We acknowledge the concern raised by the reviewer and agree that the sentences are not realistic in some cases however, the occurrence of such sentences is minimal. \\n\\n2. \\u201cThe constructed dataset contains only entailment pairs. How about analyzing the contradiction cases as well?\\u201d We kindly request the reviewer to please elaborate more on this point.\\n\\n3. The models were trained for all three labels and the output was later normalized over entailment and contradiction. We normalize over entailment and contradiction probabilities to \\u201cinvestigate if the model predicts the textual entailment to be \\\"definitely true\\\" or \\\"definitely false\\\".\\u201d The idea is to measure an upper bound of bias by investigating whether the model is confident of either entailment or contradiction, since a neutral prediction would indicate an unbiased model. On an average neutral was predicted 57% of times across our experiments.\\n\\n4. \\u201dWhy not consider the existing hypothesis? For instance, replace \\\"he\\\" or \\\"his\\\" in the existing hypothesis with \\\"she\\\" or \\\"her\\\" :\\nWe acknowledge the reviewers concern and we note that gender swapped experiments have been explored extensively in prior works for bias investigation in other NLP tasks [1][2]. \\nWe have additionally conducted two more experiments, the results for which have been updated in the appendix. (Please refer to the response to R1.)\\n\\n5. B is the number of times the entailment probability of the pro-stereotypical hypothesis was greater than its counterpart(anti-stereotypical hypothesis). As we mentioned in section 3.1 \\u201cWe consider a hypothesis \\\"pro-stereotypical\\\" if it aligns with society's stereotype for an occupation, e.g. \\\"female nurse\\\" and anti-stereotypical if otherwise.\\u201d We understand and apologize for the confusion caused to the reviewer and have reframed the caption to be: entailment probability of the hypothesis aligning with the stereotype was higher than its counterpart \\u21d2 entailment probability of the pro-stereotypical hypothesis was higher than its counterpart. The stereotype corresponding to an occupation is based on the results of the US Current Population Survey (CPS 2019). \\u201cThe selected occupations range from being heavily dominated (with domination meaning greater than 70% share in a job distribution) or stereotyped by a gender, e.g. nurse, to those which have an approximately equal divide, e.g. designer.\\u201d A list of jobs with their corresponding gender domination can be found in Appendix A.3\\n\\n6. Definition of bias in figure 1: We apologize for the confusion caused and have added details regarding bias in section 3.3 \\u201cthe bias for the models (BERT, ROBERTa and BART) is the absolute difference between the entailment probabilities of two hypotheses. We compare this with CPS 2019 data where the difference between the gender distribution is used as the bias.\\u201d\\n\\n7. 
Calculation of del P in Table 6 is done to compare how metrics change when we consider male-dominated jobs (denoted by Male in the Table) vs female-dominated jobs(Female in the table) and calculate the metrics accordingly by finding the difference in predictions between two hypotheses..The gender distribution of the jobs is mentioned in Appendix A.3. We understand the reviewer\\u2019s confusion and so we\\u2019ve tried to clarify this in the paper.\", \"minor_correction\": \"In Table 2, teacher => guard -> fixed\\n[1] Kiritchenko, Svetlana and Saif M. Mohammad. \\u201cExamining Gender and Race Bias in Two Hundred Sentiment Analysis Systems.\\u201d *SEM@NAACL-HLT (2018).\\n[2]Rudinger, Rachel et al. \\u201cGender Bias in Coreference Resolution.\\u201d ArXiv abs/1804.09301 (2018): n. pag.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their helpful suggestions and feedback. Below we try to provide a detailed response to their comments\\n\\n1. We have conducted two experiments to compare the performances based on the overlap between hypothesis and premise. Please refer to the response to R1 for the details. We've also updated the appendix with these experiments.\\n\\n2. We normalize over entailment and contradiction probabilities to \\u201cinvestigate if the model predicts the textual entailment to be \\\"definitely true\\\" or \\\"definitely false\\\".\\u201d The idea is to measure an upper bound of bias by investigating whether the model is confident of either entailment or contradiction since a neutral prediction would indicate an unbiased model. On average neutral was predicted 57% of times across our experiments. \\n\\n3. For the construction of the gender swap augmentation set, We identify the occupation-based entities from original training sets (MNLI, SNLI) and replace gender-specific words like \\u2018he\\u2019, \\u2018his\\u2019, \\u2018man\\u2019 etc with their opposite genders. As opposed to Zhao et al\\u2019s turking approach, we follow an automatic approach to perform this swapping thus our method can be extended to other datasets as well. We understand that this was not specified in the paper and so we have added this information in Section 4 of the paper. \\n\\n4. \\u201cThe results are split between Table 5 and Table 6, so I was unsure where it helped\\u201d: \\n\\nWe apologize for the confusion but the results from debiasing are mentioned in Table 7. \\u201cFrom the results in Table 7, we can see that performance on BERT with respect to bias has improved following the debiasing approach. The other two models, RoBERTa and BART, also show a slight improvement in performance with respect to most metrics.\"}",
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"We thank the reviewer for helpful and detailed feedback. Below, we try to give a detailed response to the points raised by the reviewer:\\n\\n1.Addressing the concerns about the structure of the hypothesis, we conducted a few more experiments to see the variation in performance. Following are the two structures we considered to introduce an overlap between hypothesis and premise:\", \"experiment_1\": \"We introduce an overlap of one entity ( occupation ) in the premise. Templates used for the generation of hypothesis are shown below. Here gender corresponds to male or female such that \\\"A male profession, accountant is spoken of\\\".\\n\\nHypothesis\\n\\nA [gender] profession, [occupation], has been mentioned\\n\\nA [gender] profession, [occupation], is spoken of\\n\\nA [gender] profession, [occupation], is talked about\\n\\nResults\\n \\n\\n| | | | | SNLI | | | | MNLI | | | |\\n|---|--------|:-:|:-----:|:-----:|:-----:|:-:|:-----:|:-----:|:-----:|---|---|\\n| | | | S (%) | P | B (%) | | S (%) | P | B (%) | | |\\n| | BERT | | 76.42 | 25.7 | 48.26 | | 59.57 | 29.43 | 50.05 | | |\\n| | RoBERTa | | 74.21 | 25.86 | 50.05 | | 64.05 | 22.59 | 52.89 | | |\\n| | BART | | 61.84 | 31.34 | 49.94 | | 60.47 | 28.85 | 48.26 | | |\", \"experiment_2\": \"We introduce a 100% overlap by including the entire premise in the hypothesis. Templates used for generation of hypothesis are mentioned below. Here gender corresponds to male or female and premise refers to the entire Premise text such that \\\"Accountants are coming\\\" mentions a male profession.\\n\\t\\t\\nHypothesis\\n\\n[Premise], speaks of a [gender] profession\\n\\n[Premise], talks about a [gender] occupation\\n\\n[Premise], mentions a [gender] profession\\n\\nResults\\n\\n| | | | | SNLI | | | | MNLI | | | |\\n|---|--------|:-:|:-----:|:-----:|:-----:|:-:|:-----:|:-----:|:-----:|---|---|\\n| | | | S (%) | P | B (%) | | S (%) | P | B (%) | | |\\n| | BERT | | 76.42 | 25.7 | 48.26 | | 59.57 | 29.43 | 50.05 | | |\\n| | RoBERTa | | 74.21 | 25.86 | 50.05 | | 64.05 | 22.59 | 52.89 | | |\\n| | BART | | 61.84 | 31.34 | 49.94 | | 60.47 | 28.85 | 48.26 | | |\\n\\nThe table from both these experiments has been updated in Appendix. The results show a slight improvement in bias wrt BERT but our conjecture is that this could also be because of BERT\\u2019s performance due to spurious correlations since the majority of the pairs are predicted to be entailing[1]. However, a significant bias is still maintained for the three models. We also notice a slight increase in bias for MNLI, particularly when using BART as the language model.\\n\\n2. We normalize over entailment and contradiction probabilities to \\u201cinvestigate if the model predicts the textual entailment to be \\\"definitely true\\\" or \\\"definitely false\\\".\\u201d The idea is to measure an upper bound of bias by investigating whether the model is confident of either entailment or contradiction, since a neutral prediction would indicate an unbiased model. On an average neutral was predicted 57% of times across our experiments. \\n\\n3. \\\"The results in table 6 are interesting. It seems that the models \\\"memorize\\\" the distributional correlations between gender and jobs differently for men and women.\\\" Are there any conjectures about why this may be the case? We believe this is because of the high prevalence of male-dominated jobs (~70%) in the original MNLI/SNLI datasets. 
On analysis, it was also found that around 80% of sentences mentioning these jobs were associated with male pronouns and other male-specific words (e.g, man, boy etc.). On the contrary female jobs were almost equally associated with male and female-specific words.\\n\\n4.We thank the reviewer for their suggestion on the kind of plot used. We have updated the plot in figure 1 (as well as in the appendix) to be a bar plot accordingly.\\n\\n\\n[1] Tu, Lifu et al. \\u201cAn Empirical Study on Robustness to Spurious Correlations using Pre-trained Language Models.\\u201d Transactions of the Association for Computational Linguistics 8 (2020): 621-633.\"}",
"{\"title\": \"interesting work, but not novel\", \"review\": \"*Paper summary*: This paper proposes a method for measuring stereotypical associations about occupations that are associated with genders using the natural language inference task. The method involves setting up a NLI pair where the premise is a gender-neutral statement about an occupation, and the hypothesis is explicitly gender specific. The analysis shows that NLI models do incorporate stereotypes. The paper also investigates how to reduce this bias by data augmentation.\\n\\n*Review*: At a high level, the method proposed in this paper makes sense, but there is a critical problem in terms of novelty: the idea of using NLI to probe stereotypes is not new. In fact, nearly the same proposal outlined here is explored by Dev et al (2020), who additionally use the mechanism to probe for other kinds of stereotypes as well.\\n\\nThe hypothesis templates are interesting, but present a bit of a technical question. The hypothesis of the form \\\"This text talks about a female occupation\\\" refers to the *text* of the premise, rather than the *events* or *entities* in it. In other words, it talks about the form of the premise, rather than its meaning. Of course, there's nothing wrong with this, but it breaks a crucial assumptions about how the NLI data (in particular the SNLI data) was sourced: the events and entities in the hypothesis refer to the events and entities in the premise as much as possible. In contrast, the word \\\"text\\\" in the hypotheses constructed in this work refers to the entire text of the premise, and not its entities and events. It is not clear how this change affects model performance.\\n\\nOne way to fix the issue is to change the hypothesis templates to use the same (or similar) words as the premises, and replace the occupation word with a gendered word. (But doing so would make the work even closer to that of Dev et al 2020.)\\n\\nIt is not clear why the neutral label is removed and the problem is converted into a binary problem of deciding whether the hypothesis is entailed or contradicted. It seems that most of the hypotheses would actually be neutral, and a good model should allocate most of its probability mass to the neutral label. Why do we have to force a choice between entail and contradict, when a stereotype-free model would actually predict neutral?\\n\\nThe results in table 6 are interesting. It seems that the models \\\"memorize\\\" the distributional correlations between gender and jobs differently for men and women. Are there any conjectures about why this may be the case?\\n\\nIt is not clear whether the bias that is being measured is in the representation (i.e. the *BERT embeddings) or the task (i.e., the NLI data). The experiments suggest that the problem is perhaps in both. Previous work on stereotypes involving language has largely focused how they are encoded in the embeddings, and removing them. This paper seems to argue that the provenance of the stereotypes is the training data for the task. However, the final results suggest that this is not entirely the case, and the paper does say so in the section on debiasing. It may be worth posing the question about the source of the biases early on in the paper.\\n\\nSince the paper is talking about stereotypes in language technology, the authors should go over the work of Blodgett et al (2020) to better situate the motivations and outcomes of this work. 
Indeed, there should be a discussion in the paper about the cultural context and assumptions that are implicit in the measurements. (For example, is the definition of B based on an American context? Would the measures transfer to a different country/cultural perspective?)\\n\\n*Minor point*: The plot in figure 1 should not be a line plot because the horizontal axis is categorical. A bar chart would be a better fit (and would convey the point more clearly).\\n\\n*References*\\n\\n* Dev, Sunipa, Tao Li, Jeff M. Phillips, and Vivek Srikumar. \\\"On Measuring and Mitigating Biased Inferences of Word Embeddings.\\\" In AAAI, pp. 7659-7666. 2020.\\n \\n* Blodgett, Su Lin, Solon Barocas, Hal III Daum\\u00e9, and Hanna Wallach. \\\"Language (technology) is power: The need to be explicit about NLP harms.\\\" In Proceedings of the Annual Meeting of the Association for Computational Lingustics (ACL). 2020.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Data contribution on NLI but unclear measurements and mitigation efforts\", \"review\": \"The paper's main contribution is the construction of an NLI style dataset for evaluating whether systems training on MNLI/SNLI are gender biased with respect to occupations. Premises are mined from MNLI, SNLI, QNLI, and ANLI that contain occupation words and those premises are paired with one of three templates, paraphrasing, \\\"This text mentions a XXX occupation\\\" where XXX is either 'male' or 'female'. Such pairs are put through trained NLI systems, and their preference\\u00a0toward the male or female version of the templates is recorded. If a system favors the hypothesis in line with labor statistics overall, authors conclude it is biased w.r.t gender and occupations. 3 systems are evaluated, all are found biased, and then a data augmentation approach from previous work (gender swapping from Zhao's coref bias paper) is used for mitigation with mixed results.\", \"pros\": \"1. The introduction of an NLI\\u00a0+ occupational\\u00a0gender bias dataset is\\u00a0new.\\n2. Experiments on several NLI systems\", \"cons\": \"1. The data contribution seems small and somewhat unnatural. The hypothesis format seems extremely unnatural (being text referential). I wonder if such examples are out of domain for the trained NLI systems. The ground truth for the proposed examples seems to be neutral (occupations are neither male nor female), so I would like to know\\u00a0how often evaluated systems actually predict this.\\u00a0\\n2.The bias measurement forces the models to predict either entailment or contradiction, where in fact the ground truth answer, in my opinion, for the proposed NLI examples, is neutral. (the occupation is neither male nor female). For all we know, the models are correctly predicting that with high probability, but the measurement is forcing a renormalization between entailment and contradiction examples (Section 3.2).\\u00a0 This seems like it would be a problem for the \\\"delta P\\\" and \\\"B\\\" measurement.\\u00a0\\n3.The gender swapping experiment is good to\\u00a0see but seems largely ineffective and I found its presentation hard to follow. Zhao et al. had a turking process to make sure all entities in CONLL data were covered in the swaps. Were any such measures taken here to deal with new NLI data? How was the list of swaps constructed in this case? The results are split between Table 5 and Table 6, so I was unsure where it helped, or in the unexplained Figure 2.\\u00a0From Figure 2, it seems like it hurt in some cases.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"The research topic is interesting. However, I have some concern about their constructed evaluation dataset.\", \"review\": [\"Summary:\", \"In this paper, the authors design a method to evaluate the gender bias for natural language inference tasks. They construct an evaluation dataset that consists of (premise, female hypothesis, male hypothesis) and design three scores (inconsistent predictions, probability gap, and dominant probability) to measure the gender bias for BERT, RoBERTa, and BART. The experimental results show that those models indeed have a gender bias. They also show that the bias can be reduced by data augmentation.\", \"Gender bias is an interesting and important topic in the NLP domain. However, I have some concern about this paper:\", \"When constructing the evaluation dataset, the authors replace the occupation word in the premise with other occupation words. However, this can lead to inconsistent semantics. For example, \\\"the doctor is operating\\\" becomes \\\"the teacher is operating\\\", which may not fit the realistic situation.\", \"The constructed dataset contains only entailment pairs. How about analyzing the contradiction cases as well?\", \"When analyzing the results, the authors disregard the neutral case. I am wondering if they train the models in the same way. If not, it seems that there is a domain mismatch.\", \"This point is my primary concern. In the constructed dataset, the hypothesis is generated by templates and looks like the context is not very related to the premise. However, in most of NLI datasets, the premise and hypothesis are usually related. So the domain of training set and their constructed evaluation set are different. It can be reasonable for models to perform not well and have the bias on the evaluation set, since the domain changes a lot. Why not consider the existing hypothesis? For instance, replace \\\"he\\\" or \\\"his\\\" in the existing hypothesis with \\\"she\\\" or \\\"her\\\" so you can have female hypothesis and male hypothesis.\", \"I don't quite understand the definition of B for evaluation. Is that probability for some predefined gender-specific occupations? If that is the case, how to define those words?\", \"What is the definition of \\\"bias\\\" in Figure 1?\", \"In Figure 6, how to calculate delta P for female and male respectively? In my understanding, delta P is the probability gap between female and male hypothesis.\", \"I suggest that the authors use one table to show the difference before and after the data augmentation to compare the numbers more easily.\", \"Some typos\", \"In Table 2, teacher => guard.\"], \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
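To make the probe and metrics debated in this record concrete, here is a minimal sketch of the evaluation loop: each occupation premise is paired with a 'male' and a 'female' template hypothesis, the neutral class is dropped, and the remaining mass is renormalised over entailment and contradiction before computing the probability gap (delta P) and the pro-stereotype rate (B). The `nli_model` interface and all helper names are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch (assumed interface, flagged above) of the bias probe
# described in this record; not the paper's released implementation.
from typing import Callable, Dict, List, Tuple

# nli_model(premise, hypothesis) is assumed to return class probabilities
# keyed by "entailment", "neutral", and "contradiction".
NLIModel = Callable[[str, str], Dict[str, float]]

TEMPLATE = "This text mentions a {gender} occupation"  # template quoted in the reviews above

def entailment_share(probs: Dict[str, float]) -> float:
    """P(entailment) renormalised over entailment vs. contradiction, dropping neutral."""
    e, c = probs["entailment"], probs["contradiction"]
    return e / (e + c)

def probe(model: NLIModel, premises: List[Tuple[str, str]]) -> Dict[str, float]:
    """premises: (premise_text, stereotyped_gender) pairs, gender in 'male'/'female'."""
    gaps, pro_wins = [], 0
    for premise, stereo in premises:
        p = {g: entailment_share(model(premise, TEMPLATE.format(gender=g)))
             for g in ("male", "female")}
        anti = "female" if stereo == "male" else "male"
        gaps.append(abs(p["male"] - p["female"]))  # per-premise probability gap
        pro_wins += p[stereo] > p[anti]            # counts toward the B metric
    return {"delta_P": sum(gaps) / len(gaps),       # average gap
            "B": 100.0 * pro_wins / len(premises)}  # % of pro-stereotypical wins
```

Note that the renormalisation step is exactly what AnonReviewer2 and AnonReviewer1 question above: a model that correctly places most of its mass on 'neutral' can still show a large delta P once the neutral mass is discarded.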
a2gqxKDvYys | Mind the Gap when Conditioning Amortised Inference in Sequential Latent-Variable Models | [
"Justin Bayer",
"Maximilian Soelch",
"Atanas Mirchev",
"Baris Kayalibay",
"Patrick van der Smagt"
] | Amortised inference enables scalable learning of sequential latent-variable models (LVMs) with the evidence lower bound (ELBO). In this setting, variational posteriors are often only partially conditioned. While the true posteriors depend, e.g., on the entire sequence of observations, approximate posteriors are only informed by past observations. This mimics the Bayesian filter---a mixture of smoothing posteriors. Yet, we show that the ELBO objective forces partially-conditioned amortised posteriors to approximate products of smoothing posteriors instead. Consequently, the learned generative model is compromised. We demonstrate these theoretical findings in three scenarios: traffic flow, handwritten digits, and aerial vehicle dynamics. Using fully-conditioned approximate posteriors, performance improves in terms of generative modelling and multi-step prediction. | [
"variational inference",
"state-space models",
"amortized inference",
"recurrent networks"
] | Accept (Poster) | https://openreview.net/pdf?id=a2gqxKDvYys | https://openreview.net/forum?id=a2gqxKDvYys | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"inddKreCrDq",
"n-xGnha8jyy",
"z2EveTG5Cjy",
"UfYS87An3Gd",
"g6Gai5ASnYq",
"PnNNT1t20PG",
"ZOeV3kdLIq5",
"jYWYmqr_Usk",
"3aW-a1yPE6x",
"PTVlliGXV3",
"YkO3GmSn5Q3",
"4SiYSFzX3r0",
"QO_JhhZfI9W",
"fd6aa0I-ns"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040501279,
1606172092238,
1606172064358,
1606137667478,
1605982100189,
1605541765922,
1605541718857,
1605541706583,
1605541657055,
1605541552513,
1603910011075,
1603818516740,
1603803406773,
1603488893163
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3587/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3587/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3587/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3587/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3587/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3587/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper studies how suboptimal conditioning sets create\\nsuboptimal variational approximations in variational inference with amortization in state space models. \\nWhile the point made about the role of the conditioning set is not a new one, the point was carried out further and \\nmore clearly in this paper than previous works. Addressing a couple of issues would\", \"make_the_paper_stronger\": \"- Really boiling down in the experiments to know for what models/data\\n the \\\"full\\\" approach would add value would provide concrete guidance\\n to the community.\\n\\n\\n- Notation choices in the paper are rough. For example, Appendix A.2\\n reads like a type mismatch since the w on the left is a function of\\n z but is also equal to a function of z and C.\\n\\n\\n- Adding a more detailed description of the complement of C in the\\n main text\"}",
"{\"title\": \"addendum\", \"comment\": \"I should add: I remain in favor of acceptance.\"}",
"{\"title\": \"Reply\", \"comment\": \"Regarding DKF: I think it does more than 'hint'- it plainly explains how to factorize the posterior. It may not do so in the experiments but the algorithmic contribution of a work can be and often is more than what accompanying experiments suggest.\\nI think the revision is however appropriate.\\n\\nI am not sure I am convinced by your argument regarding survivorship bias and novelty. I would insist the idea itself is not novel - it is pointed out by several authors (Fraccaro, Krishnan, and others); it should come naturally to any researcher working on VAEs and familiar with smoothing in state space models (e.g. the Baum Welch algorithm). Many of these authors who mention the methods report finding no benefit to it. Certainly, if someone had tried the idea (as I am sure many researchers have - I have personally discussed this with several researchers) on standard datasets and found clear success, they would have reported such success. If anything, I feel the survivorship bias hypothesis works against it.\\n\\nI would have preferred the paper to acknowledge more strongly the lineage of ideas, but more importantly, for the experiment section to be less of a 'we tried X and it works great' and more of 'let's understand in what type of datasets and with which type of model does X actually work better than Y?'. In other words, what insights should the reader ideally get when reading this paper? \\nI don't think it is as simple as 'smoothing is better than filtering'.\\n\\nI do however, agree, that the issue will vary across datasets, and some datasets will benefit more smoothing. It is possible that smoothing interfaces poorly with learned models (RNNs in particular), as RNNs tend to have limited memory, and ultimately the smoothing network will in most situations result in limited lookahead.\"}",
"{\"title\": \"reply to rebuttal\", \"comment\": \"Thank you for the excellent rebuttal.\\n\\nMy questions with regard to the comparison with the amortization gap defined by Cramer et al. have been answered properly. The same goes for the reflection of the narrow posterior in the experimental results. The choice of non-standard datasets still leaves open the question (as the authors say in their general reply) whether this paper overstates the problem or if previous papers have simply focused on datasets where these problems don't show up too much. Future research will hopefully clarify this.\\n\\nI have raised my score and would like to see this paper accepted.\"}",
"{\"title\": \"response to the authors\", \"comment\": \"Thank you for your interesting paper.\\nI have read the rebuttal and other reviews. I still find the paper to be both novel and important for the ICLR community. Although I'm not an expert in sequential latent-variables models, the authors clearly position their work by providing references and describing relevant models. Therefore, I do not share the concerns regarding its originality.\\n\\nAnswering the authors' question regarding the score raise. I sincerely believe that this submission should be accepted, and I'm inclined to raise the score in case it will affect the decision.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your positive review!\\n\\nWe will answer your specific questions here. Please also consider the general rebuttal given to all reviewers as a top-level reply to this submission\\u2019s thread.\\n\\n> It would be helpful to point practitioners to this related work, as one option to consider given that the KL divergence learns products of posteriors and this may not be a desirable feature of a divergence for an application.\\n\\nThe KL is motivated by the standard VI observation that argmax_q ELBO = argmin_q posterior KL. You raise a very interesting point here. We had not considered alternative objectives. We believe this is a promising avenue for future research. We added a short discussion to the manuscript.\\nWe also point you to our discussion of originality and anticipated impact in the reply to all reviews. \\n\\nWe would love to hear how you think the paper can be improved to a clear accept in your opinion.\"}",
"{\"title\": \"Reply\", \"comment\": \"Thank you for your positive review. As per your suggestion, we now added a discussion w.r.t. the related work you pointed out. Beyond that, we would much appreciate detailed feedback as to the necessary improvements that would merit a \\\"clear accept\\\" rather than a \\\"good paper\\\" review.\\nPlease also consider the general rebuttal given to all reviewers as a top-level reply to this submission\\u2019s thread.\"}",
"{\"title\": \"Individual Response\", \"comment\": \"Thank you for your detailed and constructive review. We are happy that you found our exposition clear and interesting, and that you consider the experiments to back up our findings quantitatively.\\n\\nWe will answer your specific questions here. Please also consider the general rebuttal given to all reviewers as a top-level reply to this submission\\u2019s thread.\\n\\n> It is not clear to me why these two gaps are distinct/independent\\n\\nThe short answer is that the approximation gap is defined on a per-sample basis, while the conditioning gap is derived for all samples, see eq. (6) in section 3.1 for the definition. For a single sample, the conditioning gap does not exist. It only emerges as soon as inference has to compromise over many samples. The effect of that compromise on the ELBO is the conditioning gap.\\n\\nAs you correctly say, the learning can be hampered by errors of the inference network (amortisation gap) or missing inputs (conditioning gap). From this practical lens, both could be viewed as two sides of the same medal. This is a wide definition of the amortisation gap encompassing everything that makes the inference network miss q*. \\n\\nWe cannot arrive at an equation such as \\u201camortisation gap = conditioning gap + something\\u201d, because the LHS is per-sample and the RHS involves an expectation over all samples. This would require a redefinition of the amortisation gap. \\n\\nThe amortisation gap measures how much the neural net deviates from the mathematically optimal solution. The conditioning gap is different. The problem is neither the network capacity, nor a particular sample x, it\\u2019s the foul compromise between all samples sharing the same C, but not necessarily the same ~C. The conditioning gap measures how much the shared optimal solution misses all the individual optimal solutions. Unlike with the amortisation gap, the target distribution has changed, so that even if the neural network adheres perfectly, the result is not desirable. \\n\\nWe thus face two distinct phenomena, and it is worth studying them separately, as they require different remedies.\\n\\n> In the example of the univariate gaussian in section 3.2, where would the amortisation gap from Cramer et al. fit in as a separate gap?\\n\\nFor *any x*, we can get a better q than omega. But not for *all x* of them at the same time.\\nThe q's are coupled if they share the same C, even though they differ in ~C. Hence, improving it for one ~C will make it worse for others. Notice that there is no amortisation gap here, because we can write down the solution in closed form, that is w_a(z). \\n> The authors draw conclusions from these plots that I can\\u2019t confirm by looking at the plots.\\n\\nTo convince you otherwise, we would like to draw your attention to the figure showing the prefix sampling for traffic flow, figure 4. \\n\\n\\nThe phenomenon is most clear in the second column.\\nHere the fully conditioned model supports, roughly speaking, two hypotheses: \\none of a traffic jam, where the speed drops, and,\\none without a traffic jam, where the speed stays constant. \\nThe semi-conditioned model does so as well, although to a lesser degree.\\n\\nBeing able to maintain several qualitatively different hypotheses of the future is essential to stochastic models. We can clearly see that conditioning more helps with that. The other columns show\\u2013more or less\\u2013the same. 
We did not cherry-pick those plots.\\n\\nMind also that the plots are backed up by quantitative evaluations.\\n\\n> problems need to be cherry picked\\n\\nWe address this in the reply to all reviewers.\\n\\n> I would expect more results on the influence of the sneak-peak parameter k.\\n\\nWe acknowledge that this could be an interesting ablation study. We did not consider it central to the contribution of the paper and thus spared it.\\n\\n> Are you using statically binarized mnist or dynamically binarized mnist?\\n\\nWe binarized the data before training in a consistent matter over all experiments.\"}",
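To make the product-versus-mixture intuition in this exchange concrete for readers of the thread, here is a small numerical sketch. The two-posterior setup, the Gaussian family, and all names below are our own illustration and are not taken from the paper; only the qualitative conclusion (the optimal shared, partially conditioned q is a sharp Gaussian sitting between the per-sample posteriors) mirrors the authors' univariate example.

```python
# Illustrative sketch (not from the paper): two samples share the observed
# part C but differ in the unobserved part ~C, giving two per-sample true
# posteriors p1, p2. A single shared Gaussian q must serve both; minimizing
# the average reverse KL yields a product-of-experts-style concentration.
import numpy as np
from scipy.stats import norm

z = np.linspace(-6, 6, 4001)
dz = z[1] - z[0]
p1, p2 = norm(-2.0, 0.5), norm(2.0, 0.5)  # true posteriors for two ~C values

def avg_reverse_kl(mu, sigma):
    """mean_i KL(q || p_i) for a shared Gaussian q, via numerical quadrature."""
    q = norm(mu, sigma).pdf(z)
    logq = np.log(q + 1e-300)
    return np.mean([np.sum(q * (logq - p.logpdf(z))) * dz for p in (p1, p2)])

# Grid search over shared Gaussians.
candidates = ((avg_reverse_kl(m, s), m, s)
              for m in np.linspace(-3, 3, 61)
              for s in np.linspace(0.2, 3.0, 57))
_, mu_opt, sigma_opt = min(candidates)
print(f"optimal shared q: mu={mu_opt:.2f}, sigma={sigma_opt:.2f}")
# -> roughly mu=0.00, sigma=0.50: a *sharp* Gaussian between the two modes
#    (product-of-experts geometry), not the broad mixture 0.5*p1 + 0.5*p2.
print(f"true posterior density at the shared mode: {p1.pdf(0.0):.1e}")
# -> ~2.7e-04: the shared q puts most of its mass where both true
#    posteriors are nearly zero, which is the conditioning gap at work.
```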
"{\"title\": \"Reply\", \"comment\": \"Thank you for your positive and constructive review.\\n\\nYou mention a performance gap between state-space and auto-regressive models. Advocating for or against any of these was beyond our scope for this submission. We believe both have their merits depending on the application. Notably, SSMs are still a wide-spread tool in the engineering disciplines. If anything, we want to understand if our findings can explain some of the performance gap, but by no means claim to close it.\\n\\nRegarding DKF, you are right in pointing out that eq. 3 hints at a faithful posterior approximation. Yet, all four models in section 5.2 drop at least z_t-1. We have updated our overview table accordingly. We have further added the two publications pointed out to you as related work in the appropriate parts of the paper.\\n\\nWe further point you at our reply to all reviewers, where we clarify our contribution in relation to common knowledge in the field.\"}",
"{\"title\": \"General reply\", \"comment\": \"We would like to thank the reviewers for their thoughtful and constructive feedback. Our response will first tackle two general points raised in several reviews before briefly addressing each of your reviews individually. You can find the specific rebuttals as responses to your reviews. We also invite you to read the rebuttals given to your fellow reviewers.\\n\\n\\n**Originality and Contribution**\\n\\nWe extend on where we see our contribution vs. what is commonly known in the literature. \\n\\nWe acknowledge that others have pointed out a discrepancy between the true posterior and the approximate posteriors typically used in the literature. Similarly, the approximation gap and the amortisation gap are common knowledge. In combination, it may indeed seem somewhat obvious that partial conditioning leads to suboptimal inference.\\n\\nAt the same time, we are not aware that partial conditioning has been questioned anywhere in the literature to a larger degree than, e.g., the choice of variational family. In fact, model names like \\u201cDeep Kalman Filters\\u201d or \\u201cDeep Variational Bayes Filters\\u201d imply that (approximate) Bayesian filters are learned. Our work shows that this is incorrect---a stronger result than expectable inference suboptimality that is apparently not obvious. In this light, we want to push back on the notion that we were merely fleshing out the obvious.\\n\\nOur contribution is to go beyond a demonstration of expectable inference suboptimality. We provide theoretical and empirical evidence as well as strong intuitions for partial conditioning. It is as important as the choice of variational family or architecture of the inference network. Neither a more expressive variational family nor a more powerful network architecture can fix the problem! We want to raise awareness of this issue among researchers and hence explicitly provide intuitions to guide future design.\\n\\n\\n**Choice of Data Sets**\\n\\nSome of you have inquired about our choice of non-standard data sets, since previous work saw little benefits of smoothing variants over filtering counter parts.\\n\\nThe presence of a conditioning gap is a property of the system that generated the data. In section 3.3., we detail two common cases where it does not surface. There is good reason to believe that previous papers focus on such data sets, as we have argued in the paper.\\n\\nWhy are the cases where smoothing is only as good as filtering so common in the literature then? One reason might be that we overstate the problem and it is not that dramatic at all. Another might just be survivorship bias: papers that show bad results are less likely to be published. Due to its tendency to focus on partially-conditioned approximate posteriors, the community has also focused on data sets where this has little effect. We study three quite diverse data sets where the problem is present, and believe each of them to be as relevant as previously studied data for benchmarking variational sequence models.\\n\\n\\n**Summary of changes**\", \"we_changed_the_submission_in_the_following_places\": [\"We added a clarification to our contribution in the introduction, based on your feedback.\", \"We added a paragraph about alternative divergences as pointed out by reviewer 1.\", \"We updated the definition of the conditioning gap, eq. 
(6) and surrounding sentence, to be properly defined for a data set rather than a subset.\", \"We added missing literature that was pointed out by several reviewers.\", \"Added reference to \\u201cVariational Autoencoder with Arbitrary Conditioning\\u201d by Ivanov, Oleg and Figurnov, Michael and Vetrov, Dmitry P.\", \"Added reference to \\u201cPhoto-Realistic Single Image Super-Resolution Using a Generative Adversarial Network\\u201d by Ledig et al.\", \"Added reference to \\u201cLearning and Querying Fast Generative Models for Reinforcement Learning\\u201d by Buesing et al.\", \"Added reference to \\u201cTemporal Difference Variational Auto-Encoder\\u201d by Gregor et al.\", \"We have addressed all your \\u201cminor\\u201d comments.\", \"We further included minor cosmetic changes for increased readability.\", \"We added comment on binarisation of MNIST.\"]}",
"{\"title\": \"Review\", \"review\": \"The paper reviews the issue of partial conditioning of the amortized posterior in sequential latent variable models, typically state-space models trained with a VAE-style loss, but where the posterior used is the filtering rather than smoothing posterior. The author show that training a model with posterior with missing information can lead to a gap in estimating both the posterior and the corresponding model. They show the benefits of using the correct posteriors in simple examples.\\n\\nOverall, the paper is well written, but its originality is on the low end; a large number of papers describing state-space VAE like models make explicit that the filtering posterior is technically suboptimal compared to the smoothing one; few see benefits in actually using the smoothing posterior (note that [1] derives a valid ELBO using only a one-step smoothing update). The derivation of the shared approximate posterior is standard variational inference derivations, but it is nice to see it explicitly written, and contrasted with the optimal posterior. The paper frequently gives nice intuitions behind various facts (mixture vs product of expert and the gating effect, when is an imperfectly conditioned expert enough, etc.). \\n\\nThe univariate Gaussian example is a good toy problem to understand the issue at hand. The numerical examples are selected to highlight the benefits of smoothing; they are interesting but perhaps not particularly challenging or surprising, and in relatively short sequences it makes sense that peeking or smoothing would benefit over filtering. \\n \\n\\nOverall, I am still inclined to accept the paper, as it investigates more clearly and makes more explicit knowledge that is usually treated as folklore and footnotes in other papers. \\u02c6It does not really offer methods for making smoothing posteriors actually learn more powerful models than filtering posteriors on complex datasets, nor does it technically demonstrate that SOTA state-space models have their performance limited through the information of the posterior (note that state-space models still typically underperform autoregressive ones).\", \"minor\": \"-Table 1: The Deep Kalman Filter, in some of its instantiations, has an empty \\\\bar{C_t}; they explicitly condition z_t on z_tm1 and x_\\\\geq t (see equation 3). See also [2] for another state space model that considers both the smoothing and filtering posteriors (as others, they note no benefit to the smoothing posterior).\\n\\n-Equation 5: Technically the left hand side should be the function w, not the particular value w(z) (note z is a bound variable on one side of the equation and bound on the other side). I understand what the authors mean, but it looks strange as it is.\\n\\n\\n[1] Gregor et. al, \\\"Temporal Difference VAE\\\"\\n[2] Buesing et al., \\\"Learning and Querying Fast Generative Models for Reinforcement Learning\\\"\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"review\", \"review\": \"**Summary**\\nThis paper investigates the effect of partial conditioning on amortized inference in variational auto-encoders, focusing specifically on sequential data sources where it is common practice to have a posterior that is factorized in such a way that conditioning is partial (usually only conditioning on past signals in the sequence). Given a true posterior that is conditioned on the entire observed datapoint, the authors discuss the effect of having an approximate posterior that is only conditioned on part of the input. As the approximate posterior cannot adapt to the part of the input that is left out of the conditioning, the evidence lower bound becomes less tight, due to the larger KL divergence between the approximate posterior and the true posterior. The authors compare this to the work by Cramer et al. [1], where the distinction was made between having a restricted family of possible distributions for the approximate posterior (approximation gap) and the gap between an amortized approximate posterior with an inference network shared for all datapoints and a non-amortized approximate posterior that is optimized for each datapoint separately (amortisation gap). They argue that partial conditioning leads to a third type of gap which is distinct of the aforementioned inference gaps. Through an example with discrete observations the authors derive that when the true posterior is conditioned on the full data, and the approximate posterior is only partially conditioned, the optimal approximate posterior is something akin to a product of true posteriors over the unconditioned information, and not a mixture where the left out information is marginalized out. Through a 1D example they show that this could lead to overly sharp posteriors that have high densities in regions where the true posterior has very low density. \\nAs the authors also state, several studies have shown that full conditioning on future observations results in negligible performance gains. However, the authors conjecture that this is because those results were mainly found on problems where the conditioning issue was not (or less) relevant.\", \"the_authors_demonstrate_potential_performance_gains_on_3_datasets_where_conditioning_on_future_information_could_be_helpful\": [\"unmanned aerial vehicle trajectories from the Blackbird dataset, a sequential version of MNIST where the rows of a picture correspond to sequential observations, and a selection of a traffic flow dataset. They perform log likelihood estimates and prefix-sampling to determine the effect of conditioning (partially or fully) on future observations.\", \"**Pros**\", \"The idea behind the effect of conditioning on the amortization procedure is clearly explained.\", \"The exposition that explains that the optimal approximate posterior could correspond to something akin to a product of distributions instead of a mixture is interesting.\", \"On the datasets that were selected by the authors, the benefit of full conditioning versus partial conditioning is visible in the quantitative results (log likelihood estimates).\", \"**Cons**\", \"The authors argue that the conditioning gap is a distinct gap from the amortization gap that was discussed by Cramer et al. [1]. It is not clear to me why these two gaps are distinct/independent, I would say the conditioning gap is part of the amortization gap introduced by Cramer et al. 
since the way that conditioning is handled in amortized inference is essential to the gap between the amortized and non-amortized approximate posterior. For instance, in the example of the univariate gaussian in section 3.2, where would the amortization gap from Cramer et al. fit in as a separate gap? The amortization gap can be large because the conditioning is incomplete, or because of the limited flexibility of the neural network mapping from conditioning to parameters. Such a limited flexibility could reach the same type of error as partial conditioning.\", \"The effect of the narrow posteriors for partially conditioned approximate posteriors due to it being a product of distributions and not a mixture is not clear in the experiments, even though the authors do hint that this is observed in the qualitative prefix sampling experiments. The sample prefix experiments are furthermore very hard to judge, especially for the traffic flow dataset. The authors draw conclusions from these plots that I can\\u2019t confirm by looking at the plots. For instance, with respect to the traffic flow samples the authors state that \\u201cthe partially conditioned model concentrates too much\\u2026\\u201d. It seems to me like it concentrates about the same amount as the full model, and I don't see how it\\u2019s \\u201ctoo much\\u201d as the dashed line is usually among the predictions. I think the authors are incentivized to find this conclusion because they try to argue in figure 1 that products of distributions concentrate too much argument.\", \"As the authors state, previous work showed that on popular datasets conditioning on future information has little gains. Even though the authors find datasets where gains can be made, these datasets are not incredibly convincing that this is actually a widespread problem for sequential VAEs. The lack of overlap between datasets that related work is evaluated on (such as the gym or mujoco datasets or natural speech waveform datasets and polyphonic music datasets) and datasets that this work is evaluated on and the fact that the datasets of this paper are particularly small or artificial makes me a little concerned that the problems need to be cherry picked for the proposed conditioning to have an actual effect. For instance, although the MNIST example is an obvious example where conditioning on future information could help, it is fairly artificial. The authors argue that the gym and mujoco datasets have deterministic dynamics (and therefore shouldn\\u2019t suffer from partial conditioning), but do not explain why waveform datasets or polyphonic music datasets are not suitable to study this problem. Together with the fact that related work is discussed but not compared against empirically, this makes it hard to place this work in context with related work and to judge its relevance.\", \"I would expect more results on the influence of the sneak-peak parameter k. On the traffic flow dataset the authors suggest that the full model (with largest possible k) can perform worse on the test set than a model with intermediate k because the intermediate-k model already contains sufficient future information. This could be investigated if results were compared for models with more values of k, and a leveling off of performance gains for increasing k could confirm this conjecture.\", \"**Minor comments**\", \"In section 5 there is a lot of referring to section 5 itself in the middle of sentences, which breaks the flow and seems unnecessary. 
See for instance the first paragraph of section 5.\", \"Are you using statically binarized mnist or dynamically binarized mnist?\", \"[1] Cramer et al. inference suboptimality in variational autoencoders. https://arxiv.org/abs/1801.03558\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
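The three gaps debated in this review can be summarized compactly. The per-sample decomposition below is the standard one from Cremer et al. [1]; the last line is only our paraphrase of the dataset-level conditioning gap (the paper's eq. (6) is not visible from the review), with q* the per-sample optimum in the variational family and q*_C the optimum constrained to depend on x only through the conditioning set C(x):

```latex
% Per-sample inference gaps (Cremer et al., 2018) and a paraphrase of the
% dataset-level conditioning gap; the latter is our reading, not the paper's
% exact eq. (6).
\begin{align*}
  \underbrace{\log p(x) - \mathcal{L}[q_\phi]}_{\text{inference gap}}
    &= \underbrace{\log p(x) - \mathcal{L}[q^{*}]}_{\text{approximation gap}}
     + \underbrace{\mathcal{L}[q^{*}] - \mathcal{L}[q_\phi]}_{\text{amortization gap}},\\
  \text{conditioning gap}
    &= \mathbb{E}_{x}\big[\mathcal{L}[q^{*}(\cdot \mid x)]\big]
     - \mathbb{E}_{x}\big[\mathcal{L}[q^{*}_{C}(\cdot \mid C(x))]\big].
\end{align*}
```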
"{\"title\": \"neat and useful theoretical result supported with practical examples\", \"review\": \"It is hard to write a useful review for this paper since the authors clearly have thought through many aspects of their work. I find this paper to be a useful piece, both theoretically and practically.\\n\\nMy only suggestion would be to include several works into consideration. The problem of partial observability is also important for generative image models [1,2]. I don't propose to perform a model comparison, but I think the reader would benefit if you could relate your result with similar works from CV. For instance, why other models avoid this degenerate solution while still conditioning partially. Or how one can possibly escape difficulties when full conditioning is not possible on the test stage.\", \"minor_comments\": \"1. page 1, section 2.1. I think by \\\"minimization of eq. 1\\\" the authors mean maximization of the marginal likelihood.\", \"references\": \"1. Ivanov, Oleg, Michael Figurnov, and Dmitry Vetrov. \\\"Variational autoencoder with arbitrary conditioning.\\\" arXiv preprint arXiv:1806.02382 (2018).\\n2. Ledig, Christian, Lucas Theis, Ferenc Husz\\u00e1r, Jose Caballero, Andrew Cunningham, Alejandro Acosta, Andrew Aitken et al. \\\"Photo-realistic single image super-resolution using a generative adversarial network.\\\" In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4681-4690. 2017.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Good contribution\", \"review\": \"I enjoyed this paper, and think it provides a valuable contribution to sequental latent variable modeling of time series data.\\n\\nSpecifically, this paper addresses the issue of conditioning in using variational inference to fit sequential latent variable models to data. In addition to potential errors from an amortisation gap or approximation gap, a conditioning gap is identified, where a variational distribution that is not conditioned on all possible information (previous timesteps' observations and latent variables) underperforms. \\n\\nThis seems like an 'obvious' insight, but I think that is a strength of this paper. It clearly shows why previous work falls short of using all available information to get good performance, through a simple theoretical analysis. Further, empirically the work demonstrates how to correct for the conditioning gap.\\n\\nI anticipate that through the publication of this paper at ICLR, the authors of future papers in this area will need to be careful in conditioning. This will benefit the research community as a whole, and lead to higher-quality variational approximations and papers.\", \"one_nit\": [\"although the optimal variational approximation may not be ideal in the theoretical study here, in Section 3.1, I think a discussion of why the KL divergence was used in this study is warranted. For example, there are other divergence measures that do not suffer from the issues presented here (c.f. https://dataspace.princeton.edu/handle/88435/dsp01pr76f608w). It would be helpful to point practitioners to this related work, as one option to consider given that the KL divergence learns products of posteriors and this may not be a desirable feature of a divergence for an application.\"], \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
SK7A5pdrgov | CausalWorld: A Robotic Manipulation Benchmark for Causal Structure and Transfer Learning | [
"Ossama Ahmed",
"Frederik Träuble",
"Anirudh Goyal",
"Alexander Neitz",
"Manuel Wuthrich",
"Yoshua Bengio",
"Bernhard Schölkopf",
"Stefan Bauer"
] | Despite recent successes of reinforcement learning (RL), it remains a challenge for agents to transfer learned skills to related environments. To facilitate research addressing this problem, we propose CausalWorld, a benchmark for causal structure and transfer learning in a robotic manipulation environment. The environment is a simulation of an open-source robotic platform, hence offering the possibility of sim-to-real transfer. Tasks consist of constructing 3D shapes from a set of blocks - inspired by how children learn to build complex structures. The key strength of CausalWorld is that it provides a combinatorial family of such tasks with common causal structure and underlying factors (including, e.g., robot and object masses, colors, sizes). The user (or the agent) may intervene on all causal variables, which allows for fine-grained control over how similar different tasks (or task distributions) are. One can thus easily define training and evaluation distributions of a desired difficulty level, targeting a specific form of generalization (e.g., only changes in appearance or object mass). Further, this common parametrization facilitates defining curricula by interpolating between an initial and a target task. While users may define their own task distributions, we present eight meaningful distributions as concrete benchmarks, ranging from simple to very challenging, all of which require long-horizon planning as well as precise low-level motor control. Finally, we provide baseline results for a subset of these tasks on distinct training curricula and corresponding evaluation protocols, verifying the feasibility of the tasks in this benchmark. | [
"reinforcement learning",
"transfer learning",
"sim2real transfer",
"domain adaptation",
"causality",
"generalization",
"robotics"
] | Accept (Poster) | https://openreview.net/pdf?id=SK7A5pdrgov | https://openreview.net/forum?id=SK7A5pdrgov | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"7zBpRm5j6f",
"SPdPS9f8kyP",
"LFC6eEwTeP2",
"4a0KfkfNir-",
"d_8IO6fpEmf",
"p4QFocuya_o",
"4XSuvYDIsAL",
"M6ZN6Nx12tI",
"GPVvWuU8F9b",
"WwNMgKHdl9Q",
"zHZxh-MjTd",
"YHZDS3tVq_p",
"SCbQMONAnMF",
"UXVrXE8R6XG",
"BGIPmQiy-TW"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040508138,
1606139823602,
1605940229853,
1605798477457,
1605798291271,
1605566028047,
1605565542979,
1605565087725,
1605564739400,
1605317102285,
1605316314611,
1604940029309,
1604607815347,
1603906078195,
1603773643239
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3586/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3586/AnonReviewer5"
],
[
"ICLR.cc/2021/Conference/Paper3586/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3586/AnonReviewer2"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"CausalWorld is a benchmark for robotic manipulation to address transfer and structural learning. The benchmark includes (i) a variety of tasks (picking, pushing, tower, etc) relating to manipulating blocks, (ii) configurable properties for environments (properties of blocks, gravity, etc), (iii) customizable learning settings involving intervention actors, which can change the environment to induce a curriculum.\\n\\nThe reviewers found the paper compelling and with many strengths, including \\u2018interesting and important ideas\\u2019 (R4), \\u2018simple API with a standardized interface\\u2019 for \\u2018procedural generation of goals\\u2019 (R5), \\u2018strongly motivated and tackles a real and practical problem\\u2019 (R3), and \\u2018benchmark with many good properties\\u2019 (R2). By and large, the reviewers agree that the paper presents an important benchmark satisfying several desiderata, which I certainly agree with. \\n\\nOn the other hand, most of the reviewers (3 out of 4) also raised serious concerns, more prominently, about the experimental results and the causal inference component. For instance, R5 commented that \\u201call the SOTA algorithms fail,\\u201d and it is hard to quantify how agents would perform well in different tasks. R3 pointed out the lack of \\u201cqualitative results exploring the relationship between the identified and proposed causal variables,\\u201d emphasizing that \\u2018the benchmark is well-motivated, but not backed up with strong experimental results.\\u2018\\u2019 R2 identified the lack of clear causal component in the paper while the paper mentions \\u201copportunity to investigate causality\\u201d and \\u201cunderlying structural causal model (SCM).\\u201d All in all, these are valid concerns.\\n\\nThe authors' rebuttal\\u00a0was quite detailed,\\u00a0and appreciated, but left some important questions unanswered. The first and critical issue is about the causal nature of the simulator. The simulator's name is \\\"causalworld\\\" and its stated goal is to provide \\\"a benchmark for causal structure and transfer learning in a robotic manipulation environment.\\\" Also, the first bullet in the list of contributions is: \\\"We propose CausalWorld, a new benchmark comprising a parametrized family of robotic manipulation environments for advancing out-of-distribution generalization and causal structure learning in RL.\\\" After reading the paper, I was quite surprised to realize there is no *single* example of a causal model, in any shape or form (e.g., SCM, DAG, Physics) or a structural learning benchmark. In other words, there is a serious, somewhat nontrivial gap between the claimed contributions and what was realized in the paper. One way to address this issue would be to make the causality more explicit in the paper, for example, by sharing the underlying structural causal model, how variables form causal relationships, what causal structures are being learned, and how these learned structures compare with the ground truth. I think these would be reasonable expectations of a simulator that aims to disentangle the causal aspect of the learning process. \\n\\nThe second issue is about the experimental results in terms of generalizability. 
The authors emphasized on different occasions that \\\"The primary goal of this work is to provide the tools to build and evaluate generalizable agents in a more systematic fashion, rather than building generalizable agents for the tasks specified,\\\" or \\\"the experiments is to showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld, rather than solving new tasks or proposing new algorithms.\\\" These responses are somewhat not satisfactory given that the goal of the paper is providing tools to build generalizable agents, while the authors seem to suggest they are not committed to actually building such agents. Specifically, the experiments did not demonstrate the simulator as a benchmark but only showcased its flexibility (i.e., offering a large number of degrees of freedom). One suggestion would be to evaluate how algorithms (agents) with varying degrees of \\\"generalizability\\\" power perform across tasks with various difficulty levels. As it currently stands, the tasks are too easy or too hard for the standard, uncategorized algorithms, which makes it difficult to learn any lessons from running something in the simulator. \\n\\nLastly, I should mention that the work has a great potential to introduce causal concepts and causal reasoning to robotics, there is a natural and compelling educational component here. Still, the complete absence of *any* discussion of causality and the current literature results hurt this connection and the realization of this noble goal. I believe that after reading the paper, the regular causal inference researcher will not be able to understand what assumptions and types of challenges are entailed by this paper and robotics research. On the other hand, the robotics researcher will not be able to understand what a causal model is and the tools currently available in causal reasoning that may be able to help solve the practical challenges of robotics. In other words, this is a huge missed opportunity since there is a complementary nature of what the paper is trying to do in robotics and the results available in causal inference. I believe readers expect and would benefit from having this connection clearly articulated and realized in a more explicit fashion.\\n\\nIf the issues listed above are addressed, I believe the paper can be a game-changer in understanding and investigating robotics & causality. Given the aforementioned potential and reasons, I recommend the paper's acceptance *under the assumption that* the authors will take the constructive feedback provided in this meta-review into account and revise the manuscript accordingly.\"}",
"{\"title\": \"Experiments Clarification\", \"comment\": \"Once again, we thank the reviewer for their feedback and valuable comments that will help us improve this paper.\\n\\n\\u201cWhat confuses me in Figure 4 is that \\\"full randomization\\\" should generalize better, but the result only shows that \\\"full generalization\\\" doesn't learn. With this result, it is hard to disentangle issues of \\\"curriculum\\\" for learning training distribution well, and issues of \\\"training distribution\\\" for generalizing to testing distribution well.\\u201d\\n\\n\\u2192 We argue that by explicitly defining different spaces for each of the exposed variables and subsequently designing evaluation protocols that test agent\\u2019s performance across both spaces, we take a step towards disentangling issues related to the curriculum\\u2019s in-distribution generalization and out-of-distribution generalization.\\n\\n We agree with the reviewer that when training with curriculum 2 (extreme random randomization), the policy is expected to generalize better. However, the primary insight about the generalization on test distribution w.r.t train distribution (curriculum) can be drawn from the first two curriculums with their respective evaluations. The results from the extreme full randomization curriculum (curriculum 2 ), however, are obscured by the unknown confounder: how much compute one would need to solve them and if this is possible for the MLP policies used in this work; given that not all randomized variables are observed. \\n\\nAdditionally, our results highlight that it is not straightforward to study why curriculum 2 is not generalizing as expected without solving other important challenges in RL, e.g. sample efficiency.\\n\\n\\u201cI think what Figure 4 suggests might be that someone may design a train / test distribution split to test generalization for a particular aspect of the environment, but this attempt may completely fail just because they don't have a good curriculum to train on the training distribution.\\u201d\\n\\n\\u2192 We fully agree that the explanation made by the reviewer is correct. This is why we believe that CausalWorld offers a broad set of tools by customizing evaluation protocols to investigate these failure modes. We hoped that the evaluation protocols we reported present these many different insights. Future experiments and study can define custom protocols that might be better suited for their setting. \\n\\n\\u201cMy comment \\u201cOne concern I have about this benchmark is that the training difficulty ....\\\" was pointing out the difficulty of disentangling \\\"learning difficulty\\\" and \\\"generalization difficulty\\\", and the feedback didn't fully resolve my concern.\\u201d\\n\\n\\u2192 Indeed it's difficult to disentangle the \\u201clearning difficulty\\u201d and the \\u201cgeneralization difficulty\\u201d completely since they are both tightly coupled. What we attempt here is to measure both independently such that investigations towards optimal curricula that generalize well could be made.\\n\\n\\\"I\\u2019m curious whether it is possible to disentangle \\\"learning difficulty\\\" and \\\"generalization difficulty\\\" and separately measure them. When someone studies \\\"curriculum\\\", it is to resolve \\\"learning difficulty\\\". 
If someone tried to study \\\"generalization difficulty\\\", at least we should make sure that learning on training distribution should be doable\\\"\\n\\n\\u2192 We believe that an agent\\u2019s performance on a testing distribution will always be coupled to the training distribution. We welcome any suggestions to achieve such disentanglement and would be happy to include it in the benchmark. The benchmark currently offers full flexibility in defining different curricula and evaluation protocols. \\n\\nThe experiments are chosen here to primarily showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld.\\n\\nIf there are any other concerns of yours which we have not addressed, we'd be happy to clarify them further.\"}",
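One way to make the requested separation operational is to report two numbers per trained agent: success on the training distribution itself (learning difficulty) and the drop under each held-out evaluation protocol (generalization difficulty). Below is a minimal sketch of that bookkeeping; `env`, `policy`, the intervention samplers, and the `"fractional_success"` info key are hypothetical stand-ins rather than the actual CausalWorld API, and `do_intervention` follows the signature shown in the authors' code example elsewhere in this thread.

```python
# Sketch: report learning difficulty and generalization difficulty as two
# separate numbers. All names besides do_intervention's (success, obs)
# return convention are assumptions for illustration.
import numpy as np

def mean_score(env, policy, intervention_sampler=None, episodes=50):
    """Average end-of-episode score under a given intervention sampler
    (None means the fixed, non-intervened training configuration)."""
    scores = []
    for _ in range(episodes):
        obs = env.reset()
        if intervention_sampler is not None:
            success_signal, obs = env.do_intervention(intervention_sampler())
        done, info = False, {}
        while not done:
            obs, reward, done, info = env.step(policy(obs))
        scores.append(info.get("fractional_success", 0.0))  # key assumed
    return float(np.mean(scores))

train_score = mean_score(env, policy, sample_train_intervention)
eval_drop = {name: train_score - mean_score(env, policy, sampler)
             for name, sampler in evaluation_protocols.items()}
# Low train_score -> the curriculum itself is hard to learn;
# large eval_drop -> the agent learned the curriculum but fails to generalize.
```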
"{\"title\": \"I'm confused about the experiment too.\", \"comment\": \"Thanks to the authors for the detailed comments about the questions raised in the review.\\n\\nNow I have a better understanding of the environment and I still believe that this flexible environment would benefit the field.\\nHowever, I agree with other reviewers that it is hard to understand why Figure 4 and Figure 5 matter and how they help to understand the main motivation behind this new environment design.\\n\\nWhat confuses me in Figure 4 is that \\\"full randomization\\\" should generalize better, but the result only shows that \\\"full generalization\\\" doesn't learn. With this result, it is hard to disentangle issues of \\\"curriculum\\\" for learning training distribution well, and issues of \\\"training distribution\\\" for generalizing to testing distribution well.\\n\\nI think what Figure 4 suggests might be that someone may design a train / test distribution split to test generalization for a particular aspect of the environment, but this attempt may completely fail just because they don't have a good curriculum to train on the training distribution. My comment \\u201cOne concern I have about this benchmark is that the training difficulty ....\\\" was pointing out the difficulty of disentangling \\\"learning difficulty\\\" and \\\"generalization difficulty\\\", and the feedback didn't fully resolve my concern.\\n\\nI'm curious whether it is possible to disentangle \\\"learning difficulty\\\" and \\\"generalization difficulty\\\" and separately measuring them. When someone studies \\\"curriculum\\\", it is to resolve \\\"learning difficultY\\\". If someone tried to study \\\"generalization difficulty\\\", at least we should make sure that learning on training distribution should be doable.\"}",
"{\"title\": \"Anything else you'd like us to respond to?\", \"comment\": \"Once again, we thank the reviewer for their feedback and valuable comments that will help us improve this paper.\\n\\nSince the first phase of response period is over, if you have time and could indicate if there are any other concerns of yours which we have not addressed, we'd be happy to take a look.\\n\\nThanks for your time.\"}",
"{\"title\": \"Anything else you'd like us to respond to?\", \"comment\": \"Once again, we thank the reviewer for their feedback and valuable comments that will help us improve this paper.\\n\\nSince the first phase of response period is over, if you have time and could indicate if there are any other concerns of yours which we have not addressed, we'd be happy to take a look.\\n\\nThanks for your time.\"}",
"{\"title\": \"General Clarification\", \"comment\": \"We want to thank all the reviewers for their time spent and useful comments that will help us improve this paper!\", \"a_clarification_on_our_choice_of_experiments\": \"our intention with the experiments is to showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld, rather than solving new tasks or proposing new algorithms.\\n\\nThe space of possible environments which can be created using CausalWorld is very large. Hence there are a myriad of interesting experiments and curricula one could set up, here we can only give some examples: The training curricula are chosen to show the two extremes of (0) a completely static environment and (2) all environment variables being randomized after each episode reset and some intermediate curriculum (1). In Fig 5 we show how CausalWorld can be used to evaluate performance of agents in a differentiated manner: Rather than just computing a single score, we can assess generalization ability with respect to changes in individual (or groups of) parameters and how it relates to different training curricula.\"}",
"{\"title\": \"Reviewer2 Response - Thanks for your feedback\", \"comment\": \"We thank the reviewer for the feedback which will allow us to improve the paper!\\n\\n\\u201cOne concern I have about this benchmark is that the training difficulty may hinder the analysis of generalization. It is expected that a rich training distribution would lead to better generalization, but it seems rich training distribution makes training difficult and results in worse performance on evaluation distribution\\u201d\\n\\n\\u2192 Our intention with the experiments is to showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld, rather than solving new tasks or proposing new algorithms. The space of possible environments which can be created using CausalWorld is very large. Hence there are a myriad of interesting experiments and curricula one could set up, such that we can only give some examples:\\n\\nThe training curricula are chosen to show the two extremes of (0) a completely static environment and (2) all environment variables being randomized after each episode reset (which is indeed expected to be a hard and difficult setting to learn skills in) and some intermediate curriculum (1). In the same way, one can easily define other curriculums that can represent less difficult training distributions to begin with. In Fig 5 we show how CausalWorld can be used to evaluate performance of agents in a differentiated manner: Rather than just computing a single score, we can assess generalization ability with respect to changes in individual (or groups of) parameters and how it relates to different training curricula.\\n\\n\\u201cAs far as I understand, there are a few predefined tasks, and each task distribution cannot be intervened by modifying some task-relevant parameters. However, I can imagine parameterizing the tasks. For example, we can parameterize push task distribution by introducing a range of (x, y) position of block initial positions, and introducing a range of distance from initial position to goal position. If I have misunderstood the detail of the task parameterization, please elaborate on this point.\\u201d\\n\\n\\u2192 We fully agree that intervening on the task-relevant parameters and parameterizing each task distribution with a range of values are indeed helpful in such a benchmark and actually that's exactly what CausalWorld offers. \\n\\nA full description is provided on page 5 under \\u201cTraining and Evaluation Spaces\\u201d and a subset of the parameterizations regarding each task distribution as well as the interventions allowed are defined in table 2 in the appendix.\\nE.g. for the mass of a block across all tasks, space A is [0.015, 0.045] and space B is [0.045, 0.1]; the split was chosen to make sure that the tasks are feasible in both spaces. A subset of the full list is defined in table 2 in the appendix.\\n\\nIf we take pushing as an example (see Fig 5): Agents are trained with interventions on the goal pose in space A (curriculum 1) and end up interpolating to different goal poses coming from the same distribution as shown in the results for P5 (as expected).\\nConfirming the reviewers\\u2019 intuition, the example given regarding (x,y) position would be a valid use case of the benchmark.\\n\\n\\u201cI had the impression that this paper attempts to make a connection with the research about causality. 
For example, the name of the benchmark is \\\"CausalWorld\\\", and the text mentions \\\"opportunity to investigate causality\\\", and \\\"underlying structural causal model (SCM)\\\". However, it is unclear to me how this benchmark can help to study causality exactly. Could you elaborate on this point?\\u201d\\n\\n\\u2192 As we provide the tools to perform interventions on the causal and non-causal variables in the environment, many of the current causal structure learning algorithms from the literature that require intervention capabilities could be evaluated in settings that build upon CausalWorld. Also, by assessing generalization with respect to changes in causal variables (e.g. mass), we indirectly assess whether the agent has learned a causal notion of said variables (e.g. mass).\\nWe will make sure to elaborate on this point further in the manuscript.\\n\\n\\u201cIt seems that space A and space B in Table 2 (Appendix C) is an arbitrary split of a range. How these ranges are determined? Is there any motivation behind this split?\\u201d\\n\\n\\u2192 The splits are chosen such that the task is solvable in each of the two spaces (A and B); so that reasonable conclusions about transferability and generalization could be made from the experiments. Naturally, there are many other valid ways to split the parameter space, in fact users have the option to define their own custom intervention spaces.\"}",
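To illustrate the A/B convention with the mass example quoted above: space A covers [0.015, 0.045] and space B covers [0.045, 0.1]. The sketch below is our own; in particular, the intervention-dictionary schema ('tool_block' -> 'mass') is an assumption for illustration, not a documented part of the CausalWorld API.

```python
# Sketch of the space-A / space-B split for the block-mass variable,
# using the numbers quoted above. The intervention-dict schema is assumed.
import numpy as np

MASS_SPACE_A = (0.015, 0.045)  # enabled for interventions during training
MASS_SPACE_B = (0.045, 0.100)  # held out for out-of-distribution evaluation

def sample_mass_intervention(space, rng=None):
    rng = rng or np.random.default_rng()
    lo, hi = space
    return {"tool_block": {"mass": float(rng.uniform(lo, hi))}}

train_intervention = sample_mass_intervention(MASS_SPACE_A)
eval_intervention = sample_mass_intervention(MASS_SPACE_B)  # e.g. protocol P2
```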
"{\"title\": \"Reviwer5 Response - Thanks for your feedback\", \"comment\": \"We want to thank the reviewer for their time spent and useful comments that will help us improve this paper. We appreciate your comments and that you find the benchmark exciting and would be willing to use it.\\n\\n\\u201cWhile I don't see a link to the code, I would encourage the authors to design their API in a simple and standardized way, as that is likely what would motivate people to use CausalWorld instead of manually defining train/eval splits in their own physics simulators.\\u201d\\n\\n\\u2192 We thank the reviewer for pointing this out. The code base can be accessed under the following link https://drive.google.com/file/d/19wNBbwQkJyZBnbPOWvg6ZNGCRCwi5glj/view. The framework was also designed with a focus on simplicity, modularity and extensibility, to allow users to define their own block shapes, intervention spaces, task distributions, etc.\", \"an_example_of_intervening_on_the_environment\": \"task = generate_task(task_generator_id='stacked_blocks')\\n\\n env = CausalWorld(task=task, enable_visualization=True)\\n\\n env.reset()\\n\\n for _ in range(10):\\n\\n for i in range(200):\\n\\n obs, reward, done, info = env.step(env.action_space.sample())\\n\\n goal_intervention_dict = env.sample_new_goal()\\n\\n success_signal, obs = env.do_intervention(goal_intervention_dict)\\n\\n env.close()\\n\\n\\n\\u201cThe main weakness I see of the paper is the experimental section. The authors train a few SOTA RL algorithms on 3 different train configurations of increasing randomization, and test on 12 different eval configurations. First in terms of clarity, I found Figure 5 difficult to interpret. I think it would be helpful if for each of the Eval protocols it was clearly described what was changing.\\u201d\\n\\n\\u2192 Thanks for pointing out the potential confusion in figure 5; we updated the figure and caption accordingly (please look at figure 5 and figure 6 in the current pdf). \\n\\n\\u201cIn general I think the best way to present this would be to look at each pair of \\\"train domain, eval domain\\\" and the corresponding performance, with some clear description about what sort of generalization is needed.\\u201d\\n\\n\\u2192 We thank the reviewer for this suggestion. Indeed, presenting each pair as \\u201ctrain domain and eval domain\\u201d would be very helpful. However, we have 4 task distributions with 3 train domains for each and 12 test domains for each pair. Therefore, we did not want to discuss 144 train-eval pairs separately but decided to draw conclusions only on some particular interesting examples. Nevertheless, we agree that the protocols and curriculums required some additional clear descriptions which we now added to our manuscript. \\n\\n\\u201cAlso in terms of performance, it seems like when faced with anything more challenging than push/pick-place with limited randomization, all the SOTA algorithms fail even on the train domain (and as a result struggle on Eval domains as well). So one concern is that of the many domains presented in the benchmark, perhaps only a few are actually solvable by current RL algorithms during training. At the same time I think this indicates the challenges in learning generalizable policies, and may inspire better RL algorithms\\u201d\\n\\n\\u2192 One of our key motivations behind this work was to point out limitations in some of the most commonly used SOTA RL algorithms by proposing environment domains that can be extremely challenging. 
Although we hypothesize that some of the more challenging tasks might be still solvable for them, given enough reward engineering and computation resources, we are happy to see that the reviewer agrees with us that it may indicate the challenges in learning generalizable policies and may inspire better RL algorithms.\"}",
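The snippet in the reply above resamples the goal after every episode; wrapped in a small object, this becomes the kind of "intervention actor" curriculum the paper describes. Everything below other than `generate_task`, `CausalWorld`, `sample_new_goal`, and `do_intervention` (all taken from the authors' snippet) is our own illustrative scaffolding, including the import paths and the actor class.

```python
# Curriculum-style training loop: resample the goal pose after each episode.
# The import paths, actor class, and 'pushing' task id are assumptions;
# the env calls mirror the authors' snippet above.
from causal_world.task_generators.task import generate_task  # path assumed
from causal_world.envs.causalworld import CausalWorld        # path assumed

class GoalInterventionActor:
    """Decides which intervention to apply at each episode boundary."""
    def act(self, env):
        return env.sample_new_goal()

task = generate_task(task_generator_id='pushing')
env = CausalWorld(task=task, enable_visualization=False)
actor = GoalInterventionActor()

obs = env.reset()
for episode in range(10):
    for _ in range(200):
        obs, reward, done, info = env.step(env.action_space.sample())
    # Apply the actor's intervention before the next episode starts.
    success_signal, obs = env.do_intervention(actor.act(env))
env.close()
```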
"{\"title\": \"Reviwer4 Response - Thanks for your feedback\", \"comment\": \"We want to thank the reviewer for their time spent and useful comments that will help us improve this paper. We appreciate your comment that you find the benchmark clearly written, valuable for sim-to-real research and nicely structured with interesting and important ideas.\", \"some_additional_details_on_the_motivation_of_the_robot\": \"We use the TriFinger robot from W\\u00fcthrich et al (2020) https://arxiv.org/abs/2008.03596 where setup and choice of the design is specified extensively. We decided for this setup as it is specifically designed to allow for dexterous fine manipulation beyond grasping, and because it is open-source (which will allow researchers to build their own instance and investigate sim-to real). Also, learning such control as opposed to the much simpler setting with a robotic gripper allows for much more sophisticated skills and capabilities in solving the proposed tasks and hopefully even more challenging ones in the future.\\n\\nWe will make sure to add this to our manuscript.\"}",
"{\"title\": \"Reviwer3 Response Part 2\", \"comment\": \"\\u201cIn figure 5, what is the time step reported (0 across all evaluations)?\\u201d\\n\\n\\u2192 Thanks for pointing out that this might be a potential confusion. We changed the paper accordingly. The timestep 0 specified in figure 5 is the timestep at which the interventions of each protocol are performed. However, the timestep for the score calculation is the last timestep.\\n\\n\\u201cIt is unclear how A and B differ..and what P0-P11 are\\u201d\\n\\n\\u2192 A description of these spaces is provided in page 5 under \\\"Training and evaluation spaces\\\".\\nE.g. for the mass of a block, space A is [0.015, 0.045] and space B is [0.045, 0.1]; this was chosen arbitrarily to make sure that the tasks are feasible in both spaces. A subset of the full list is defined in table 2 in the appendix.\", \"so_for_pushing_for_instance_as_shown_in_figure_5\": \"agents trained with interventions on the goal pose in space A (curriculum 1) end up interpolating to different goal poses coming from the same distribution as shown in the results for P5 (as expected).\", \"p0_p11_are_the_evaluation_protocols_used\": \"They are defined by interventions performed on specific variables to evaluate the agent. We can make the analogy here to an examiner who is testing the agent across different axes by intervening on the different environment variables in a specific way. The current protocols P0-P11 uses random interventions on specific variables using one of the spaces A or B.\\n\\n\\u201cThe authors could investigate .. different reward structures, other progressive curriculums..\\u201d\\n\\n\\u2192 The primary goal of this work is to provide the tools to build and evaluate generalizable agents in a more systematic fashion, rather than building generalizable agents for the tasks specified. The reward structures and the curriculums used are meant to serve as baselines for future comparison to other methods. Therefore, we leave exploring different methods that could potentially have better generalization for future work, since this was not the focus of this work. \\n\\n\\u201cSome qualitative results exploring the relationship between the identified and proposed causal variables (potentially through the lens of the agent performance) would be helpful\\u201d\\n\\n\\u2192The RL agents evaluated in this work did not identify causal variables explicitly, hence an explicit comparison of agents\\u2019 internal representations with the actual causal variables would be difficult. Nevertheless, by assessing generalization with respect to changes in causal variables (e.g mass), we indirectly assess whether the agent has learned a notion of the causality of said variables (e.g. mass) with respect to task success.\\n\\n\\u201cA qualitative experiment comparing the difference in learned behaviors between two policies and the difference in performance, to show that the performance reported by the benchmark does match intuition would be helpful\\u201d\\n\\n\\u2192 Provided in the supplementary website https://sites.google.com/view/causalworld-iclr/home, under \\u201cdisentangling generalization\\u201d two demonstrations are shown for two policies, one trained with no interventions (curriculum 0) and the other is trained with interventions on goal pose (curriculum 1). In the middle, a radial plot showing the different protocol scores of the two policies overlayed on top of each other. 
As can be seen, the policy trained with curriculum 0, ended up overfitting on the goal pose and the other one was able to generalize better to different goal poses (expected). With the tools we provide in CausalWorld, this could be measured more explicitly through measuring generalization across different axes. \\n\\n\\u201cThe authors limit their framework and results to the manipulation of simple block shapes.\\u201d\\n\\n\\u2192 We fully agree with the reviewer, what we present here is the first, but already extensive, iteration of the framework. In follow-up iterations, more causal variables will be exposed for instance. The framework was also designed with a focus on modularity and extensibility, to allow users to define their own block shapes, intervention spaces, task distributions, etc. In the attached code, there are already several tutorials showing how to accomplish some of these extensions. Additionally, some of the tasks provided are already very challenging to solve so it might not be necessary to consider more complex cases.\\n\\n\\u201cThe authors are motivated by facilitating research in causal structure learning.. potentially the causal graph-parameterized policy learning approach from \\u201cCausal Confusion in Imitation Learning\\u201dor similar algorithms, would be good to include\\u201d\\n\\n\\u2192 We fully agree that benchmarking with methods that directly learn the causal graph might be very insightful. However, as mentioned before, the focus here is to provide the tools to build and evaluate agents that could generalize. We leave exploring these specific methods for future work. \\nThanks for pointing out \\u201cCausal Confusion in Imitation Learning\\u201d as a potential method. However, this work uses imitation learning rather than learning a policy from scratch without prior data.\"}",
"{\"title\": \"Reviwer3 Response Part 1 - Thanks for your feedback\", \"comment\": \"We thank the reviewer for the feedback which will allow us to improve the paper!\\nAn important takeaway from your review is that we need to be clearer and more explicit about the motivation of the experiments. Would adding something along the lines of the following paragraph be helpful?\\n\\nOur intention with these experiments is to showcase the flexibility regarding curricula and performance evaluation schemes offered with CausalWorld, rather than solving new tasks or proposing new algorithms. The space of possible environments which can be created using CausalWorld is very large. Hence there are a myriad of interesting experiments and curricula one could set up, here we can only give some examples:\\nThe training curricula are chosen to show the two extremes of (0) a completely static environment and (2) all environment variables being randomized after each episode reset and some intermediate curriculum (1). In Fig 5 we show how CausalWorld can be used to evaluate performance of agents in a differentiated manner: Rather than just computing a single score, we can assess generalization ability with respect to changes in individual (or groups of) parameters and how it relates to different training curricula.\\n\\n\\u201cIn figure 4, it\\u2019s unclear to me what the new experimental result is here, given that the benchmark is meant to test transfer and generalization ability, and the results presented are on training curves. It seems that the main conclusion here is that the choice of learning curriculum is important for performance, which as the authors point out is unsurprising?\\u201d\\n\\n\\u2192 While ultimately we care about transfer and generalization ability (figure 5), we believe it is still important to also show training curves. This allows to see e.g. whether the agents picked up any success signal and whether they converged, which might explain part of the evaluation performance in figure 5. We will make sure to clarify this point further in the paper. Additionally, since we are presenting a novel benchmark, providing training results can be a useful reference for other researchers for reproducibility. \\n\\n \\u201cIn figure 4, curriculum 2 seems poorly motivated, in that full randomization without any curriculum at the beginning of training seems likely to fail as it does. Have the authors considered testing curriculums ..like ADR..\\u201d\\n\\n\\u2192 We agree with the reviewer that curriculum 2 was expected to fail, however, the primary motivation behind the choice of the curriculums is to showcase the generalization capacity of two extreme cases of interventions on the environment variables and a standard engineered curriculum as mentioned before. \\nIndeed ADR would be an interesting direction to explore in a follow-up work - thanks for pointing it out. The curriculums chosen are provided as baselines to prove the feasibility of some tasks in the benchmark, rather than engineering a robust policy.\\n\\n\\u201c\\u201cIt was challenging for me to follow Figure 5, since it was not clear what the training environments agents were being trained on, and which environments they were being evaluated under.\\u201d\\n\\n\\u2192 Thanks for pointing out the potential confusion in figure 5; we updated the figure and caption accordingly (please look at figure 5 and figure 6 in the current pdf). 
In our setting, we don\\u2019t explicitly define training and testing environments but rather we define two distributions (A and B) for each of the exposed variables in the environment. So the training and testing environments are defined by the curriculum, the evaluation protocols and the corresponding spaces for interventions. During training, space A is enabled for interventions (but that doesn\\u2019t mean the agent will be trained on all the values in A).\\nE.g. in pushing, consider the mass of the block: space A is [0.015, 0.045] and space B is [0.045, 0.1], so potentially depending on the chosen curriculum the agent can experience many values in A by having many interventions on the environment accordingly. In curriculum 0 and 1 the agent only explores one value for the mass (0.02), since there are no interventions on the mass. On the other hand, curriculum 2 explores many random values from space A interval due to the random interventions in this space.\\n\\nDuring evaluation, the current evaluation protocols test the agent against space A and space B (depends on the protocol), where some of the values were potentially seen during training. For instance protocol P0 tests the default setting of the task with no interventions performed on the environment (so this one for sure is seen during training). Another example P2 defines an evaluation protocol for random interventions sampled from space B of the block mass - which was not seen during training for the curriculums discussed. \\n\\nSimilar results for the evaluation of picking, pick and place and stacking2 are shown in the appendix(Figure 11). \\nWe hope the figure and caption modification clarified this point more.\"}",
"{\"title\": \"An interesting benchmark for causal structure and transfer learning based on simulation of a manipulation environment.\", \"review\": \"This paper proposes a a robotic manipulation benchmark for causal structure and transfer learning in a simulation environment considering 3D shape construction tasks given a set of blocks. Baseline results using model-free algorithms are provided for chosen tasks, e.g. pushing, picking, pick&place, stacking. It is also stated that a real version of the robot can be built (as it is open-sourced) for sim2real research. The paper is clearly written, nicely structured and, presents interesting and important ideas. It exposes a large set of parameters, e.g. properties of blocks (size, mass, pose), friction, goals for generalisation evaluations. Having a real-world counterpart makes it very valuable for sim2real research. Authors provide and discuss the relevant previous work detailing how their work connects to the existing literature.\", \"a_minor_comment\": \"The particular choice of the robot can be motivated, as it is a special design.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper presents a new benchmark, CausalWorld, for studying generalization, transfer learning, and causal structure learning in RL and robotics. This is a hugely important problem, and I think this benchmark has some clear advantages over existing benchmarks. The benchmark consists of a simulated three finger robot over a bin containing blocks, within which there are 8 \\\"families\\\" of tasks, (a) pushing, (b) picking, (c) pick and place, (d) stacking 2 blocks, (e) stacking many blocks, (f) general rearrangement, (g) more complex multi-block stacking, and (h) building towers. More importantly, for each family of tasks, there is controllable procedural generation of goals as well as controllable factors of the environment such as object sizes, masses, frictions, colors, etc.\\n\\nThis enables what I think is the key contribution of this paper - a procedural way to define training/evaluation splits where each split samples from different subspaces of the above controllable factors. This provides a systematic way of defining problems which require varying degrees of generalization, measuring the difficulty of such splits, and defining curricula within each split, which is critical to developing learning algorithms which are capable of this sort of generalization. While prior work (Yu et al, James et al) have defined many robotic tasks with some shared structure, one challenge is that it is difficult to say how much generalization one can expect between any two tasks which can be quite different, a problem which this benchmark takes a step towards addressing.\\n\\nLike the paper mentions, prior works have also used procedural generation over similar controllable factors like this paper does. In fact most physics simulators do allow varying these parameters directly. But, a simple API with a standardized interface to define these splits, as well as common splits that are used as benchmarks is still missing, and this paper takes an important step towards that. While I don't see a link to the code, I would encourage the authors to design their API in a simple and standardized way, as that is likely what would motivate people to use CausalWorld instead of manually defining train/eval splits in their own physics simulators.\\n\\nThe main weakness I see of the paper is the experimental section. The authors train a few SOTA RL algorithms on 3 different train configurations of increasing randomization, and test on 12 different eval configurations. First in terms of clarity, I found Figure 5 difficult to interpret. I think it would be helpful if for each of the Eval protocols it was clearly described what was changing. In general I think the best way to present this would be to look at each pair of \\\"train domain, eval domain\\\" and the corresponding performance, with some clear description about what sort of generalization is needed. Also in terms of performance, it seems like when faced with anything more challenging than push/pick-place with limited randomization, all the SOTA algorithms fail even on the train domain (and as a result struggle on Eval domains as well). So one concern is that of the many domains presented in the benchmark, perhaps only a few are actually solvable by current RL algorithms during training. 
At the same time I think this indicates the challenges in learning generalizable policies, and may inspire better RL algorithms.\\n\\nOverall, I think this is an exciting benchmark, and would be excited to use it.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Official Blind Review #3\", \"review\": \"### Summary\\n\\nMotivated by the difficulty of evaluating RL\\u2019s ability to transfer behaviors across environments, the authors propose the CausalWorld benchmark. Unlike prior benchmarks, CausalWorld exposes well-defined casual variables, in the form of task factors, and focuses on robotic manipulation of an open-source robot platform. The authors make CausalWorld easily usable for both training (in defining a learning curriculum) and evaluation (in targeting specific expected generalizations), and make it easy to extend. In their original release, the authors include eight concrete tasks to test generalization, and present baseline results on these tasks.\\n\\n---\\n\\n### Positives\\n\\n- The paper is strongly motivated and tackles a real and practical problem. Evaluating transfer in RL agents has been challenging, especially for robotics, and the authors\\u2019 proposed benchmark framework could be useful in addressing this.\\n- The authors\\u2019 benchmark supports useful behavior both for training, in gradually varying the task distributions, and in testing, for evaluating generalization ability. Additionally, since it is tied to a real world, open source robot platform, this benchmark in the future could also be used to evaluate sim2real transfer.\\n- The authors\\u2019 framework is defined in a way that seems easy-to-extend and supports multiple use cases, including custom \\u201ctask generators\\u201d for defining new tasks or goals, and \\u201cintervention actors\\u201d to define a learning curriculum.\\nThe paper provides relatively strong baseline experiments, with quantitative results across several model-free RL algorithms (PPO, SAC, TD3), and multiple potential curriculum techniques.\\n\\n---\\n\\n### Negatives\\n\\nThe current iteration of the experiments, with figures 4 and 5, are unclear and confusing. Specifically:\\n\\n- In figure 4, it\\u2019s unclear to me what the new experimental result is here, given that the benchmark is meant to test transfer and generalization ability, and the results presented are on training curves. It seems that the main conclusion here is that the choice of learning curriculum is important for performance, which as the authors point out is unsurprising?\\n- In figure 4, curriculum 2 seems poorly motivated, in that full randomization without any curriculum at the beginning of training seems likely to fail as it does. Have the authors considered testing curriculums where the domains for the causal variables get progressively more challenging? For instance, automatic domain randomization (ADR) from \\u201cSolving rubik\\u2019s cube with a robot hand\\u201d may be a useful curriculum to compare. \\n- It was challenging for me to follow Figure 5, since it was not clear what the training environments agents were being trained on, and which environments they were being evaluated under. 
Further, do these results for pushing hold across other tasks (picking, pick and place, stacking2)?\\n- In figure 5, what is the time step reported (0 across all evaluations)?\\n- Figure 5 shows that there is some generalization to tasks in space A and B, but it is unclear how A and B differ, what the variables between both are, which environments in A were trained on, and what P0-P11 are.\", \"in_addition_i_think_some_other_experimental_results_would_be_helpful\": \"- The authors could investigate a more detailed analysis on different reward structures, other progressive curriculums, and other methods that claim better generalization and transfer performance. \\n- Some qualitative results exploring the relationship between the identified and proposed causal variables (potentially through the lens of the agent performance) would be helpful.\\n- A qualitative experiment comparing the difference in learned behaviors between two policies and the difference in performance, to show that the performance reported by the benchmark does match intuition would be helpful.\\n\\nApart from the experimental results, I have some other broader (and potentially less pressing) concerns:\\n\\n- The authors limit their framework and results to the manipulation of simple block shapes. Manipulating non-block objects would result in more complex goals, introducing more causal variables that are potentially harder to disentangle and represent cleanly, but are still important for real-world applications. \\n- The authors are motivated by facilitating research in causal structure learning, but this paper focuses almost exclusively on studying transfer learning and generalization ability. Potentially this is mitigated by benchmarking causal learning algorithms that try to directly learn the causal graph or reason between causal variables. For instance, potentially the causal graph-parameterized policy learning approach from \\u201cCausal Confusion in Imitation Learning\\u201d, or similar algorithms, would be good to include.\\n\\n---\\n\\n### Recommendation\\n\\nOverall, I vote for rejecting. I think the benchmark is well motivated, but not backed up with strong experimental results. The motivation for the benchmark is to show that the framework can be used to study transfer performance, but the current experimental results do not convince me that the framework makes it easy to uncover new insights in practice. One reason to potentially accept the benchmark is that it seems easy to extend, but this is also difficult to evaluate from the limited experiments presented.\\n\\nIf the authors were to respond to some of my comments above, by providing a better understanding of the figures and experiments (in case I am misinterpreting the current results), and by showing the utility of the benchmark, then some of my concerns would be addressed.\\n\\n---\\n\\n### Minor feedback\\n\\n- Which hand designed dense reward function is being used? I see this is present in the supplementary figures, I would also add a reference in the main text.\\n- Which observation spaces were the figures trained/evaluated in (state or pixel)?\\n\\n----\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Useful benchmark for studying generalization of RL\", \"review\": \"This paper proposed a new benchmark for studying reinforcement learning and its generalization in the context of the robotic manipulation problem. To study the generalization of a learned policy, the proposed benchmark is equipped with an interface that makes intervention easy. This interface helps to define a training space and an evaluation space so that one can systemically study both in-distribution and out-of-distribution generalization of a learned policy. At the same time, the proposed benchmark simulates an open-source robot platform, which makes sim2real transfer experiments easier.\", \"strengths\": [\"This paper proposed an RL benchmark with many good properties: systematic intervention of environment distribution and potential application to sim2real transfer experiments\", \"The source code of the benchmark provided with the submission is clean and well documented, so it could benefit future research based on this work.\", \"Proposed evaluation protocol gives an insight on how this benchmark can be used to evaluate generalization of RL agent.\"], \"weaknesses\": [\"One concern I have about this benchmark is that the training difficulty may hinder the analysis of generalization. It is expected that a rich training distribution would lead to better generalization, but it seems rich training distribution makes training difficult and results in worse performance on evaluation distribution.\", \"As far as I understand, there are a few predefined tasks, and each task distribution cannot be intervened by modifying some task-relevant parameters. However, I can imagine parameterizing the tasks. For example, we can parameterize push task distribution by introducing a range of (x, y) position of block initial positions, and introducing a range of distance from initial position to goal position. If I have misunderstood the detail of the task parameterization, please elaborate on this point.\"], \"questions_to_authors\": [\"I had the impression that this paper attempts to make a connection with the research about causality. For example, the name of the benchmark is \\\"CausalWorld\\\", and the text mentions \\\"opportunity to investigate causality\\\", and \\\"underlying structural causal model (SCM)\\\". However, it is unclear to me how this benchmark can help to study causality exactly. Could you elaborate on this point?\", \"It seems that space A and space B in Table 2 (Appendix C) is an arbitrary split of a range. How these ranges are determined? Is there any motivation behind this split?\"], \"recommendation\": \"I recommend accepting this paper because it could benefit the field by helping people to easily study generalization in a complex robotic manipulation setting. To the best of my knowledge, no existing open-sourced RL environment for robotic manipulation does not support systematic intervention of the environment distribution.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
OCRKCul3eKN | Addressing Extrapolation Error in Deep Offline Reinforcement Learning | [
"Caglar Gulcehre",
"Sergio Gómez Colmenarejo",
"ziyu wang",
"Jakub Sygnowski",
"Thomas Paine",
"Konrad Zolna",
"Yutian Chen",
"Matthew Hoffman",
"Razvan Pascanu",
"Nando de Freitas"
] | Reinforcement learning (RL) encompasses both online and offline regimes. Unlike its online counterpart, offline RL agents are trained using logged-data only, without interaction with the environment. Therefore, offline RL is a promising direction for real-world applications, such as healthcare, where repeated interaction with environments is prohibitive. However, since offline RL losses often involve evaluating state-action pairs not well-covered by training data, they can suffer due to the errors introduced when the function approximator attempts to extrapolate those pairs' value. These errors can be compounded by bootstrapping when the function approximator overestimates, leading the value function to *grow unbounded*, thereby crippling learning. In this paper, we introduce a three-part solution to combat extrapolation errors: (i) behavior value estimation, (ii) ranking regularization, and (iii) reparametrization of the value function. We provide ample empirical evidence on the effectiveness of our method, showing state of the art performance on the RL Unplugged (RLU) ATARI dataset. Furthermore, we introduce new datasets for bsuite as well as partially observable DeepMind Lab environments, on which our method outperforms state of the art offline RL algorithms.
| [
"Addressing Extrapolation Error in Deep Offline Reinforcement Learning"
] | Reject | https://openreview.net/pdf?id=OCRKCul3eKN | https://openreview.net/forum?id=OCRKCul3eKN | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"2HXzPHhbnq1",
"aybx2j-saax",
"ZqgTwPeUe7",
"nFMLOn_78T-",
"r0AfKp6vDRs",
"euKFHCgVGP4",
"2iJFpFf_rpI",
"h2i-JkSyIqP",
"WZFImqJnQmI",
"N8Jp-uVVMVC",
"MtENuUk-gjG",
"Dws57_LhFLK"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040513169,
1605877872050,
1605762897643,
1605642163272,
1605641730515,
1605640938929,
1605640693791,
1605640615332,
1605640358675,
1603835892409,
1603729735483,
1603150986118
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3585/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3585/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3585/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposed a new method for improving offline RL. AC thinks that the paper has a potential, but all reviewers suggest rejection as the current write-up is quite poor. This causes many misunderstandings of reviewers. The authors clarify some misunderstandings/concerns in the discussion phase, but did not update the draft accordingly. Hence, AC cannot suggest acceptance, given the current form.\"}",
"{\"title\": \"On theory and the expectations to solve general offline RL for any MDP with any behavioral policy\", \"comment\": \"Thanks for your quick reply. We don't claim that BVE doesn't have any limitations; we will make this clearer in the paper. Nevertheless, we stand by our claim that the theorem R2 quoted, being an upper bound, and the references we cited in the paper support our claim regarding the one-step policy improvement.\\n\\nWe think that the reviewer expects that \\\"to claim to be solving a general offline RL, one needs to be able to solve any MDP with any behavioral policy.\\\" However, we find this expectation unrealistic.\\n\\nFor example, let's assume having a dataset generated by two MDPs, both with one state and two actions: the first action always gives a reward of 0 in both MDPs. In the first MDP, the second action gives a reward of +1; in the second MDP, the same action receives a reward of -1. On the other hand, the behavioral policy, in both cases, always chooses the first action. No offline RL algorithm can successfully solve both of these MDPs at the same time with any number of policy improvements.\\nAs such, having access to a reasonable behavior policy is an inherent part of the offline setting, not our method's limitation. This is also likely why no other published offline RL method gives the convergence guarantees for any MDP/behavioral policy as R2 seems to expect from us. \\n\\nMoreover, R2 also asks us to compare against BC. We already compare our methods against BC in our experiments. We report BC results on every dataset in the paper except bsuite because, unsurprisingly, BC performed very poorly on that dataset due to the noisy transitions and small dataset sizes. Instead, we decided to include BCQ on bsuite which is a stronger offline RL baseline.\"}",
"{\"title\": \"Short Reply\", \"comment\": \"Thanks for the response.\\n\\n\\\"With a decent $\\\\pi^0$ policy, you should expect to converge quickly. \\\"\\nThat might be true. However, the claim of the paper would be different. If the paper only considers the situation where the behavior policy is good or one PI step is sufficient, the paper should (1) explicitly mention it and (2) compare to appropriate baselines for the setting (e.g., behavioral cloning or safe policy improvement from [1]). Another issue is that how can we know if one PI step is sufficient? On the other hand, if the paper claims that their method works in general offline RL setting (as mentioned in my original review), I would consider the behavior value estimation method has a big limitation. \\n\\n[1] Thomas, Philip, Georgios Theocharous, and Mohammad Ghavamzadeh. \\\"High confidence policy improvement.\\\" In International Conference on Machine Learning. 2015.\"}",
"{\"title\": \"Clarifications to All Reviewers\", \"comment\": \"We would like to thank all the reviewers for their valuable comments and suggestions.\\n\\nWe would like to highlight that some of our reviewers have conflicting views on behavior value estimation (BVE):\\n\\n* Reviewer 1 states that Behavior Value Estimation is not novel, because it is similar to behavior constrained offline RL methods, like SPIBB, ABM, CRR, BEAR, BCQ, CQL. However, none of these methods attempt to estimate the value of the behavior policy directly, nor do they perform policy improvement in one step. Instead, all of them jointly estimate the value of a learned policy, and improve the policy according to that value estimate. (In BCQ this policy is implicit, but Q is the value of the learned policy $\\\\pi$, and not the behavior policy $\\\\pi_b$). This combination is prone to over-estimation. And by using BVE, we bypass this issue.\\n\\n* Reviewer 2 seems to disagree with Reviewer 1 on the novelty of BVE, but claims that it is not theoretically sound. R2 also cites a theorem, but as we discuss in our response to the reviewer, the theorem cited is a worst-case analysis and doesn\\u2019t make claims about the exact number of policy improvement steps needed given a policy in a practical setting.\\n\\nRegarding the experiments and hyperparameters as raised by AnonReviewer1 and AnonReviewer2, beta and v are only tuned on Atari online policy selection games, and the best settings are used for the rest of the tasks. In addition, on Deepmind Lab and bsuite, we did a grid search for the regularization coefficient and learning rate.\\n\\nWe are going to fix the typos that are pointed out by the reviewers.\"}",
"{\"title\": \"About Typos and Experiments\", \"comment\": \"> In its current form, the experimental part of the paper is immature. A uniform structure is missing. The statements are not sufficiently substantiated...\\n\\nWe found that combining these three methods (BRr) is required to be used together to achieve the best results, as we showed in our experiments. That is why we decided to include all of them in this paper. We wanted to develop a general recipe that uses the methods introduced in our paper to achieve the best results for a given problem. For example, if the dataset is low-coverage, we suggested using BVE. The ranking loss seems to improve across all the datasets, and the reparameterization trick helps with the stability issues during the training (especially with Q-learning.)\\n\\n> The claim \\\"this one step is typically sufficient for dramatic gains as we show in our experiments (see for example Fig. 9)\\\" is not sufficiently substantiated, because one cannot speak of \\\"typically\\\", as only \\\"Atari online policy selection games\\\" are considered...\\n\\nWe only used the Atari online policy selections games for ablations and hyperparameter search. Figure 9 is showing the robustness of the model to the reward distribution and the dataset size on on-policy selection games. However, we provide the results on offline-policy selection games in Figure 4. We will fix that in the paper to refer to Figure 4 in that sentence.\\n\\n> Questions: What is the meaning of error bars in Fig. 3 and Fig. 7? What is meant by \\\"BC\\\"?\\n\\nWe use BC to mean \\\"behavioral cloning\\\", similarly to previous literature [1]. It is a very common acronym in reinforcement and imitation learning literature. We will define this acronym in the paper explicitly.\\n\\n[1] Torabi, Faraz, Garrett Warnell, and Peter Stone. \\\"Behavioral cloning from observation.\\\" arXiv preprint arXiv:1805.01954 (2018).\\n\\n> What is meant by \\\"discrete offline RL algorithms\\u201c? it is not clearly described that discrete actions are required.\\n\\nBy \\\"discrete offline RL algorithms\\u201c we refer to offline RL algorithms with discrete actions.\\n\\n> In the sum, i runs from 0 to 100 and is divided by 100\\u2026\\n\\nThanks we will fix these typos.\\n\\n> In Figure 8, the measurement results should not be connected by lines. Lines should only be used for fits or predictions of theory...\\n\\nWe will incorporate those changes suggested by the reviewer to the appendix.\\n\\n> Please do not use \\\\pm for the standard deviation, but for the specification of the uncertainty (aka error of the measurement) e.g. the standard error.\\n\\nThanks we will change that to the standard error.\"}",
"{\"title\": \"About the Quoted Theorem and Limitations\", \"comment\": \"> ... The authors mention \\u201cFortunately, this one step is typically sufficient for dramatic gains as we show in our experiments (see for example Fig. 9). This finding matches our understanding that policy iteration algorithms typically do not require more than a few steps to converge to the optimal policy (Lagoudakis & Parr, 2003; Sutton & Barto, 2018, Chapter 4.3)\\u201d. I think this is not true. Even in tabular case, policy iteration requires a polynomial time w.r.t. the size of the state space and the action space (e.g., see Theorem 1.14 of [1]). I think it is possible to construct a family of MDPs and some behavior policies such that one step of policy improvement is not sufficient...\\n\\nWhile it is possible to construct an MDP where 1-step is not enough, so far on the datasets that we have tried, this does not seem to be an issue. In particular, the improvement obtained with one-step of policy improvement in low-coverage datasets provides quite significant gains compared to regular Q-learning in the same settings.\\nWe would like to point out that the theorem cited in [1], forms an upper bound to the number of steps needed in the worst case scenario. It proves that policy iteration needs at most $O(polynomial)$ steps to reach the optimal performance starting from any $\\\\pi^{0}$. The theorem, therefore, supports our claims that policy iteration is efficient. With a decent $\\\\pi^{0}$ policy, you should expect to converge quickly. In the case where you start from the optimal policy, the number of improvement steps needed is actually 0.\\n\\n> The related work section should not just list previous works, but explain how the proposed algorithm is different from or similar to the existing algorithms.\\n\\nIn the related work section, we highlight some of the most important papers published recently on offline RL, in general. Nevertheless, we talk about other closely related works contrasting to ours throughout the paper.\\n\\n> I have some questions to clarify my understanding of the paper: I am not sure what is the propose of Appendix A? Maybe I missed some important points here, but it has already been shown that off-policy + function approximation + bootstrapping can diverge (Sutton and Barto\\u2019s book).\\n\\nIt is true that off-policiness with a form of function approximation and bootstrapping has been known to diverge. But it hasn\\u2019t been shown before that the divergence necessarily will also happen with the neural networks as well. In Appendix A we give an example of such a case and show an example of how a non-linear neural network\\u2019s q-value estimation can diverge which to best of our knowledge was not shown before. \\n\\n> What is the exact definition of extrapolation error used in this paper?...\\n\\nThanks for your comment. We tried to define these terms and try to give references to the related literature in Section 2.1. We will try to make make them more clear in the paper.\\n\\n> Regarding the experiments: How were \\u03bd, \\u03b2, and \\u03b1 selected? The algorithm introduces more hyper-parameters, so I wonder do you have any comments on hyperparameter search (e.g. do existing algorithms also require tuning extra hyper-parameters?). Do you have any reason why it needs a larger mini-batch and a smaller learning rate to update \\u03b1 ?\\n\\nFor the hyperparameter tuning please see our response to AnonReviewer1. 
Yes our motivation was basically to reduce the variance or noise in the gradients that arises during the training with small minibatches. Since \\u03b1 is just a single scalar, the using larger minibatches comes with almost no extra cost. \\n\\n> Minor comments \\u03b8\\u2032 in Equation (2) is not defined\\n\\nThanks for pointers, we will fix this in the paper.\"}",
"{\"title\": \"Hyperparameters and Typos\", \"comment\": \">... I am curious whether BRr and QRr perform well in this domain? If not, could you please explain why different combinations of the three propose techniques perform differently in different domains? Is there any principle about which combination should be used in which kind of dataset?\\n\\nIn our Atari experiments, we noticed that the performance difference between QRr and BRr was not very significant. As we noted in the paper, we noticed that Behavior Value Estimation works very well if the coverage in the dataset is low (see Figure 9 for Atari coverage experiments). We also realized that *Behavior Value Estimation* significantly improves the performance on Deepmind Lab datasets over regular Q-learning (R2D2). Thus we decided to only focus on BR. We confirmed this by comparing QR and BR on the Deepmind Lab on the seekavoid dataset. We ran experiments with BRr as well, and the results were only slightly better than BR. We will add the QR and BRr results to the paper as well. Unlike QR, since BR doesn\\u2019t get affected by over-estimation, we realized the reparameterization trick provides a smaller improvement when used with BR.\\n\\n> In Figure 10 and 11, it seems that on Atari games, the larger weight of the ranking regularization means better performance. Then why \\\"0.05 seems to be the optimal choice for the ranking regularization hyper-parameter\\\"? Is the hyper-parameter value 0.05 used across all the datasets? Is the proposed approach sensitive to this value?\\n\\nFigure 10 just shows how the regularization hyperparameter affects the action gap and Figure 11 shows how that hyperparameter influences over-estimation. They don\\u2019t tell anything with respect to the performance. According to Figure 10, we can arrive at the conclusion that increasing the regularization coefficient can result in lower estimation error and better optimization but doesn\\u2019t necessarily mean that it will cause better performance as we discussed in the text. In Figure 11, we show the effect of increasing the regularization on the overestimation of the Q network when evaluated in the environment where the hyperparameter we used for atari (0.05) achieves lower over-estimation error.\\n\\nWe used the hyperparameter value 0.05 for Atari and Deepmind Lab datasets but on bsuite we realized using larger regularization values provide better performance for bsuite. In general, we would say relatively our method is quite robust to the regularization coefficient for the Atari, the best hyperparameters found on online-policy selection games (see Figure 5) performed very well on offline policy selection games (see Figure 4) where the hyperparameter search is not allowed.\\n\\n> In Algorithm 1, is it a typo of using \\u03b3? In the main text, \\u03b3 means the discounting factor, then what's the definition of L(\\u03b3). The details of the re-parameterization of Q-network need more clarification.\\n\\nYes, that is a typo, it was supposed to be alpha. We will fix this in the paper and clarify reparameterization of the Q-network.\"}",
"{\"title\": \"On the Motivation, Design Choices and Hyperparameters\", \"comment\": \"> In equation (5), the specific formulation of the weight of regularization, e.g. \\u0392 in exp((GB(s)\\u2212Es\\u223cD[GB(s)])/\\u03b2) is not well-motivated. Why we use exp function instead of another simpler monotonically increasing function? Why we need the coefficient \\u03b2?\\n\\nThis formulation is based on the filtering mechanism proposed in the CRR paper. In CRR paper, they have used the advantage function from the critic to filter out the transitions. Moreover, CRR paper ablated and reported better results with exp function instead of for instance the indicator function. In contrast to CRR, in this paper, we rely on the discounted returns, which, according to our preliminary experiments, seem to be more reliable since we don\\u2019t have access to a policy directly. \\n\\n> How to choose the value of \\u03bd and \\u03b2 (the given value in the paper are just randomly picked)?\\n\\nWe performed a small grid search (v in [0.005, 0.05, 0.5] and \\u03b2 in [2, 1, 0.5]) on a small number of Atari games. We used the best setting from this search for all other datasets and tasks.\\n\\n> In equation (7), the regularization for learning the scale parameter is not well-motivated either. Is it really necessary to have this regularization? \\n\\nYes, in our preliminary experiments, we have verified that disabling the regularization causes a degradation in the performance. We can add this to the appendix, if the reviewers find this insight useful.\\n\\n> Why square function here is better than other functions?\\n\\nWe don\\u2019t claim that squared hinge loss is better. Square hinge loss is very common in the learning to rank literature [3] due to some of its properties mostly for SVMs and kernel machines. However, with deep learning models we don\\u2019t think the choice between square hinge loss and non-squared one will make a big difference. We decided to use the squared hinge loss for two reasons: first, it was easier to establish its relationship to the other known RL methods such as ranking policy gradients, and second it punishes the small errors less and large errors more.\\n\\n[3] Chapelle, Olivier, and S. Sathiya Keerthi. \\\"Efficient algorithms for ranking with SVMs.\\\" Information retrieval 13, no. 3 (2010): 201-215.\\n\\n> Without a theoretical ground, all these design choices and value choices are just like heuristic or magic numbers...\\n\\nWe agree that a theoretical grounding might be helpful in terms of understanding, but we decided to go with more breadth with respect to environments and empirical analysis in this paper rather than providing novel theoretical insights. The theory is useful, but we need to believe that for offline RL literature we need more practical papers providing empirical insights on the datasets we have. We hope that our clarification above has clarified the reviewers misinterpretation.\\n\\n> In section 4.1 and 4.2, the performance of the proposed method is not significantly better than the baselines (the error bar of the proposed method overlaps with the error bar of the baselines).\\n\\nIn Section 4.1, we used standard deviation for the plots, however, after we switched to standard errors and the error bars are better both in section 4.1 and 4.3. The errors reported in 4.2 are the error across the Atari games. We realized that all methods can have very high variances across the Atari games. 
The part of the reason is because some Atari games need to be run long and are very large-scale, however, sometimes due to the cluster infrastructure, learners can be interrupted or can go down because of hw issues, which can introduce noise in the performance. Let us note that our baselines also have high noise in 4.2 as well, which we took from [4].\\n\\n[4] Gulcehre, Caglar, Ziyu Wang, Alexander Novikov, Tom Le Paine, Sergio G\\u00f3mez Colmenarejo, Konrad Zolna, Rishabh Agarwal et al. \\\"RL Unplugged: Benchmarks for offline reinforcement learning.\\\" arXiv preprint arXiv:2006.13888 (2020).\"}",
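For concreteness, here is a sketch of how the exp-weighted squared hinge ranking term discussed above could look. This is reconstructed from the rebuttal's description, not copied from the paper's Eq. (5): the margin, normalization, and exact weighting may differ.

```python
import torch
import torch.nn.functional as F

def ranking_regularizer(q_values, a_data, returns, beta=0.5, margin=0.0):
    # q_values: [B, A] Q-estimates; a_data: [B] dataset actions;
    # returns: [B] discounted behavior returns G^B(s).
    weight = torch.exp((returns - returns.mean()) / beta)   # exp((G^B - E[G^B]) / beta)
    q_data = q_values.gather(-1, a_data.unsqueeze(-1))      # [B, 1]
    hinge = F.relu(margin + q_values - q_data)              # rank violations, [B, A]
    hinge = hinge.scatter(-1, a_data.unsqueeze(-1), 0.0)    # ignore the data action itself
    return (weight * hinge.pow(2).sum(dim=-1)).mean()       # squared hinge, exp-weighted

# total_loss = td_loss + nu * ranking_regularizer(...), with nu and beta
# picked from the small grids mentioned in the rebuttal.
```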
"{\"title\": \"On the novelty of BVE, Lack of Theory and Significance\", \"comment\": \"> Behavior value estimation removing the max operator to alleviate over-estimation seems not novel, because many previous work in offline RL (e.g. SPIBB, ABM, CRR)... Also, the one-step policy improvement in section 3.1 is not novel,...\\n\\nThe reviewer states that Behavior Value Estimation (BVE) is not novel, because it is similar to behavior constrained offline RL methods, like SPIBB, ABM, CRR, BEAR, and BCQ. However, none of these methods attempt to estimate the value of the behavior policy directly, nor do they perform policy improvement in one step. Instead, all of them jointly estimate the value of a learned policy, and improve the policy according to that value estimate. (In BCQ this policy is implicit, but Q is the value of the learned policy $\\\\pi$, and not the behavior policy $\\\\pi_b$). This combination is prone to over-estimation. We bypass this issue with BVE.\\n\\n> Significance: The main concern of the proposed method is whether it is theoretically sound and performs well ...\\n\\nIn this paper, we focused on discrete actions, and we showed extensive results on many discrete-action tasks across three different domains: bsuite, Atari, and Deepmind Lab. RL with discrete actions can be applied to real-world applications, and continuous control problems can be cast as a discrete control problem. Additionally, there have been other impactful works in offline RL which focused on discrete actions [1,2]. However, we believe that our method can be extended to continuous action domains.\\n\\nHyper-parameter tuning is a typical process throughout DL and Deep RL literature. While we believe that our method is robust to hyper-parameters, this is not something any DL system can guarantee except through empirical evidence. \\n\\n[1] Fujimoto, Scott, Edoardo Conti, Mohammad Ghavamzadeh, and Joelle Pineau. \\\"Benchmarking batch deep reinforcement learning algorithms.\\\" arXiv preprint arXiv:1910.01708 (2019).\\n\\n[2] Agarwal, Rishabh, Dale Schuurmans, and Mohammad Norouzi. \\\"An Optimistic Perspective on Offline Reinforcement Learning.\\\" (2020). \\n\\n> \\u2026 The proposed technique (1) can be better than behavioral policy only when the behavior policy is not deterministic ...\\n\\nThere are two parts to answer this concern. The first one is regarding how likely it is for the behavior policy to be deterministic and greedy concerning its value estimation. In this paper, we focused on stochastic datasets. The datasets generated from real-world environments are rarely deterministic and greedy, mainly because the world is noisy, and greedy policies can not give good coverage to learn policies. Often demonstrations gathered from humans can be considered as stochastic too. Thus, we decided to focus on environments generated by stochastic policies.\\n\\nNow, in terms of evidence that one-step of policy improvement performs better, in Figure 8, we have run experiments to verify how well our proposed methods perform when compared to BC, R2D2, and CQL while changing the epsilon (noise level) if fixed behavior policy. In Figure 8, noise level 0 corresponds to a deterministic and a greedy policy. As shown in that figure, both BVE and R2D2 perform equally poorly in that setting. 
In contrast, behavior cloning, and BVE with ranking regularization perform much better.\\n\\n> Overall, the proposed techniques (2) (3) are not quite convincing without the support of the theory.\\n\\nUnfortunately, it is unclear for us what the reviewer means by theory in this comment. Is the question whether the updates will converge? Convergence proofs in general can not be provided when one deals with non-linear function approximators. Even ignoring the function approximator, most DRL systems employ additional terms to the loss (e.g. entropy regularization for actor-critic) which breaks any hope of ensuring that the updates will lead to a fixed point. \\n\\nE.g. For a tabular case, (2) will simply make the value of any action-state pair not in the dataset to be below the value of the action-state pairs within the datasets. One could potentially build on this to show that in the tabular case, (2) can not force learning to diverge from the best solution you can find given the data that you have (e.g. if you initialize under the optimal policy under the data that you have you will not move away from it). However such a proof will have minimal implications once we go to the large practical scale of things that involve neural networks. To what extent do such proofs (or theory) provide any extra confidence above the empirical evidence? \\n\\n(3) is guaranteed to prevent the divergence by bounding the critic, in some sense it is akin to vmin and vmax in distributional RL, however instead of setting the vmin and vmax as a hyperparameter, here we proposed to learn the scale of the Q-network\\u2019s outputs from the data.\"}",
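A minimal sketch of technique (3) as described above may help. The tanh bounding and learned scale follow the rebuttal; the penalty shown is only an assumed placeholder, since the paper's actual regularizer (its Eq. (7)) is not reproduced here.

```python
import torch
import torch.nn as nn

class BoundedQ(nn.Module):
    # Q(s, .) = alpha * tanh(f_theta(s, .)): the critic is bounded by a
    # learned scale alpha -- akin to v_min/v_max in distributional RL,
    # but learned from the data.
    def __init__(self, trunk):
        super().__init__()
        self.trunk = trunk  # f_theta: maps states to [B, A] pre-activations
        self.log_alpha = nn.Parameter(torch.zeros(()))  # alpha = exp(log_alpha) > 0

    def forward(self, s):
        return self.log_alpha.exp() * torch.tanh(self.trunk(s))

    def scale_penalty(self, td_targets):
        # Illustrative only: keep alpha near the magnitude of observed targets.
        return (self.log_alpha.exp() - td_targets.abs().max().detach()).pow(2)

# Per the rebuttal to R2, alpha can be updated with a larger mini-batch and a
# smaller learning rate than the trunk parameters, e.g. via a separate optimizer.
```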
"{\"title\": \"Good empirical results but may be difficult to work in other domains\", \"review\": \"Summary:\\nThis paper focuses on the problem of Q value over-estimation in offline reinforcement learning and proposes three approaches (tricks) to help solve this problem. (1) estimate Q value of behavior policy avoiding max-operator in Q learning and take greedy action according to the behavior value estimation. (2) introduce ranking loss to push down the value estimation of all unobserved state-action pairs to avoid over-estimation. (3) use tanh operator to bound the range of Q value estimation, and learn a scale parameter with regularization term. The experimental results on several domains (Atari, Bsuite, Deepmind Lab) with discrete action space show performance better than existing algorithms.\", \"clarity\": \"This paper is generally written clearly, though I have several questions about the technique and experiments, which may need more clarification. Please see 'Cons' part for the detailed questions.\", \"originality\": \"The techniques of ranking regularization and re-parameterization of Q-values are novel in the literature of offline reinforcement learning. Behavior value estimation removing the max operator to alleviate over-estimation seems not novel, because many previous work in offline RL (e.g. SPIBB, ABM, CRR) use bellman operator without max operature for policy evaluation of target policy. Also, the one-step policy improvement in section 3.1 seems not novel, many previous work (e.g. BEAR, BCQ) sample the action according to the learned policy and take one of the sampled action with maximum value estimation at test time in the implementation.\", \"significance\": \"The main concern of the proposed method is whether it is theoretically sound and performs well in the other domains (such as domains with continuous action space) without much tuning of hyper-parameters (weight of regularization term when combining the proposed approaches). The three tricks are intuitive and might be useful in practice, but I am not sure whether the contributions are significant enough to match the acceptance bar of ICLR.\", \"pros\": [\"This paper attempts to solve a significant problem (extrapolation error in offline RL).\", \"The paper explains the intuition behind each proposed approach clearly.\", \"The experimental results are good on several domains with discrete action space, better than the baseline methods.\"], \"cons\": \"* In section 1, \\\"Surprisingly, this technique with only one round of improvement ... often outperform existing offline RL algorithms\\\" seems a bit misleading and overclaiming. The proposed technique (1) can be better than behavioral policy only when the behavior policy is not deterministic and greedy with respect to the value estimation. And the experiment only verifies that it can outperforms the existing methods in some specific datasets collected on domains with discrete action. I doubt whether this technique can \\\"often outperform\\\" the existing algorithms (e.g. ABM, CRR, CQL) on continuous control tasks.\\n* Overall, the proposed techniques (2) (3) are not quite convincing without the support of the theory. In equation (5), the specific formulation of the weight of regularization, e.g. $exp((G^B(s)\\u2212E_{s\\u223cD}[G^B(s)])/\\u03b2)$ is not well-motivated. Why we use exp function instead of another simpler monotonically increasing function? Why we need the coefficient \\u03b2? 
How to choose the value of \\u03bd and \\u03b2 (the given value in the paper are just randomly picked)? In equation (7), the regularization for learning the scale parameter is not well-motivated either. Is it really necessary to have this regularization? Why square function here is better than other functions? Without a theoretical ground, all these design choices and value choices are just like heuristic or magic numbers. The experiments show these choices can work on some dataset (perhaps with much tweak and tuning), but we are not confident whether they can also work on new datasets.\\n*In section 4.1 and 4.2, the performance of the proposed method is not significantly better than the baselines (the error bar of the proposed method overlaps with the error bar of the baselines).\\n*In section 4.1 and 4.2, QRr and BRr are mainly considered, but in section 4.3, only B and BR are considered. I am curious whether BRr and QRr perform well in this domain? If not, could you please explain why different combinations of the three propose techniques perform differently in different domains? Is there any principle about which combination should be used in which kind of dataset?\\n*In Figure 10 and 11, it seems that on Atari games, the larger weight of the ranking regularization means better performance. Then why \\\"0.05 seems to be the optimal choice for the ranking regularization hyper-parameter\\\"? Is the hyper-parameter value 0.05 used across all the datasets? Is the proposed approach sensitive to this value?\\n*In Algorithm 1, is it a typo of using \\u03b3? In the main text, \\u03b3 means the discounting factor, then what's the definition of $\\\\mathcal{L}(\\\\gamma)$? The details of the re-parameterization of Q network need more clarification.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Promising experimental results but the algorithm has some limitations\", \"review\": \"##### Summary & recommendation\\nThe paper proposes an offline RL algorithm, which consists of three techniques: behavior value estimation, ranking regularization, and reparametrization of Q function, to reduce overestimation errors. The algorithm is evaluated on several benchmark datasets. \\n\\nOverall, the paper is clearly written. The empirical results also seem promising. However, my main concern is that the proposed algorithm, especially with the behavior value estimation technique, has a big limitation for solving the offline RL problem (details come below). Moreover, I don\\u2019t think the paper provide enough justification for using all three techniques other than experiment results. It would have been better if the authors could discuss why we need all these three techniques (e.g. maybe behavior value estimation + ranking regularization is similar to behavior regularization?), rather than just combing these tricks to make the algorithm work empirically. I think there is still room for improvement before publishing this paper. Therefore, I recommend to reject the paper.\\n\\n##### Supporting arguments\\nThe goal of an offline RL algorithm is to find a nearly optimal policy $\\\\pi$ from an offline dataset. However, the behavior value estimation technique is just learning the value function for the behavior policy. Even though we perform a single policy improvement step in the test time, it is generally not sufficient to obtain a nearly-optimal policy, especially when the behavior policy is far from optimal. \\n\\nThe authors mention \\u201cFortunately, this one step is typically sufficient for dramatic gains as we show in our experiments (see for example Fig. 9). This finding matches our understanding that policy iteration algorithms typically do not require more than a few steps to converge to the optimal policy (Lagoudakis & Parr, 2003; Sutton & Barto, 2018, Chapter 4.3)\\u201d. I think this is not true. Even in tabular case, policy iteration requires a polynomial time w.r.t. the size of the state space and the action space (e.g., see Theorem 1.14 of [1]). I think it is possible to construct a family of MDPs and some behavior policies such that one step of policy improvement is not sufficient. In such case, I think the proposed algorithm would not work well. \\n\\nThe related work section should not just list previous works, but explain how the proposed algorithm is different from or similar to the existing algorithms.\", \"i_have_some_questions_to_clarify_my_understanding_of_the_paper\": [\"I am not sure what is the propose of Appendix A? Maybe I missed some important points here, but it has already been shown that off-policy + function approximation + bootstrapping can diverge (Sutton and Barto\\u2019s book).\", \"What is the exact definition of extrapolation error used in this paper? The paper mentions extrapolation over-estimation, extrapolation under-estimation, training time extrapolation error, and testing time extrapolation error, but I don\\u2019t see a clear definition of these terms.\", \"Regarding the experiments: How were $\\\\nu$, $\\\\beta$, and $\\\\alpha$ selected? The algorithm introduces more hyper-parameters, so I wonder do you have any comments on hyperparameter search (e.g. do existing algorithms also require tuning extra hyper-parameters?). 
Do you have any reason why it needs a larger mini-batch and a smaller learning rate to update $\\\\alpha$?\", \"##### Minor comments\", \"$\\\\theta\\u2019$ in Equation (2) is not defined\", \"[1] Alekh Agarwal, Nan Jiang, Sham Kakade, and Wen Sun. Reinforcement Learning: Theory and Algorithms.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"no sound knowledge to rely on\", \"review\": \"Summary:\\nThe paper deals with offline aka batch RL for discrete actions. Three techniques ((i) behavior value estimation, (ii) ranking regularization, and (iii) reparametrization of the value function), which can be combined with each other, are presented. These techniques are compared with other methods in different experiments. Furthermore a new benchmark is being introduced. It is claimed that in this new benchmark, the new techniques outperform state-of-the-art methods. Furthermore it is claimed that the presented method \\u201ebehavior value estimation\\u201c, although it is only a one-step greedy optimization is typically already sufficient for dramatic gains.\", \"strong_points\": \"The abstract and the first part of the introduction (the first 1.5 pages) are very well written and the problems of offline RL are very well presented. Also very good is the consideration that the existence of a behavior policy is a restriction that does not apply to every given dataset, as expressed in the terms \\\"behavior policy(s)\\\" and \\\"coherent policy\\\". However, it is not specified in the text what exactly is meant by \\\"coherent policy\\\".\", \"weak_points\": \"The representation becomes increasingly unclear from page 2 onwards. None of the three techniques presented is sufficiently discussed and sufficiently tested. None of the statements is supported convincingly, although the paper already makes extensive use of references to the Appendix. There are 14 references in the main text to the Appendix and four to figures in the Appendix.\", \"recommendation\": \"In its current form, the experimental part of the paper is immature. A uniform structure is missing. The statements are not sufficiently substantiated. Therefore I recommend to reject the paper. It seems that there is not enough space to present and sufficiently verify all three techniques.\\n\\nThe claim \\\"this one step is typically sufficient for dramatic gains as we show in our experiments (see for example Fig. 9)\\\" is not sufficiently substantiated, because one cannot speak of \\\"typically\\\", as only \\\"Atari online policy selection games\\\" are considered. And additionally Fig. 9 is located in the Appendix.\\nThe meaning of the error bars in Fig. 3 and Fig. 7 is not explained.\\nThe (on first sight) counterintuitive result that the performance of CQL and QRr at cart-pole is higher at 40% noise than at 0% must be explained in the text or caption.\\nBecause the presented ideas are not sufficiently examined and supported, no sound knowledge is generated on which the reader can rely.\", \"questions\": \"What is the meaning of error bars in Fig. 3 and Fig. 7?\\nWhat is meant by \\\"BC\\\"?\", \"additional_feedback_with_the_aim_to_improve_the_paper\": \"The abbreviation BC is not introduced. It is unclear what is meant by it.\\nIt is unclear what is meant by \\\"coherent policy\\u201c.\\nWhat is meant by \\\"discrete offline RL algorithms\\u201c?\\nit is not clearly described that discrete actions are required.\\nIn the sum, i runs from 0 to 100 and is divided by 100, but from 0 to 100 we count 101 (unfortunately there are no line numbers in the manuscript, to locate the sum).\\n\\\"5e - 2\\\" does not look nice, better is 0.05 or $5 \\\\cdot 10^{-2}$.\\nIn Figure 8, the measurement results should not be connected by lines. 
Lines should only be used for fits or predictions of theory.\\nIn Appendix F the text width is not respected.\\t\\nPlease do not use \\\\pm for the standard deviation, but for the specification of the uncertainty (aka error of the measurement) e.g. the standard error.\\n\\n-----------------------------------------\\n(Dec 3.) Although I appreciate the feedback, my assessment of the paper remains unchanged.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
vttv9ADGuWF | Certified robustness against physically-realizable patch attack via randomized cropping | [
"Wan-Yi Lin",
"Fatemeh Sheikholeslami",
"jinghao shi",
"Leslie Rice",
"J Zico Kolter"
] | This paper studies a certifiable defense against adversarial patch attacks on image classification. Our approach classifies random crops from the original image independently and the original image is classified as the vote over these crops. This process minimizes changes to the training process, as only the crop classification model needs to be trained, and can be trained in a standard manner without explicit adversarial training. Leveraging the fact that a patch attack can only influence some pixels of the image, we derive certified robustness bounds on the resulting classification. Our method is particularly effective when realistic physical transformations are applied to the adversarial patch, such as affine transformations. Such transformations occur naturally when an adversarial patch is physically introduced to a scene. Our method improves upon the current state of the art in defending against patch attacks on CIFAR10 and ImageNet, both in terms of certified accuracy and inference time. | [
"adversarial machine learning",
"certifiable defense",
"patch attack"
] | Reject | https://openreview.net/pdf?id=vttv9ADGuWF | https://openreview.net/forum?id=vttv9ADGuWF | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"HprP3Cit20r",
"PXvrmhG5MZ_",
"CJYhMpM-tix",
"2UfUt8xrZtn",
"GTFigNVUns",
"LMoxIS39q5k",
"fxO6iTYgR2",
"a6Fvd-HxG4Z",
"g74BP57QAVF"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040385888,
1606263430314,
1606261883965,
1606261470448,
1606261235495,
1604355794644,
1603897139515,
1603744742195,
1603463151929
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3584/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3584/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3584/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3584/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3584/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3584/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3584/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3584/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper provides a simple prediction procedure to defend against (rectangular) patch attacks, and also a method to obtain some random estimates of the certified robustness of the method. The simplicity of the method is certainly appreciated. On the other hand, there are a number of issues preventing the acceptance of this paper. The main problem is that the paper deals with a randomized predictor, yet the certification guarantee developed for deterministic predictors is applied. This leads to several problems, starting from the target being undefined to unfair comparisons. While the authors made an attempt to address this in the rebuttal, more work is needed to properly settle this issue.\"}",
"{\"title\": \"We revised the paper to improve readability, and ran experiments as suggested\", \"comment\": \"Thank you for the positive and helpful review. We have addressed the comments in the updated manuscript. Below please find responses for the questions:\\n\\nWhile adversarial training is a strong empirical defense against adversarial attack, it does not provide robustness guarantees, i.e., given a clean/unperturbed image, an adversarially-trained classifier cannot certify if adversarial attack can or cannot change the predicted class on this image. On the other hand, our method can certify if the predicted class of a given image can or cannot be changed under patch attack. We have considered adversarially-training $f_{\\\\theta}$, and it does improve empirical performance but decreases certification probability. This is because certification probability increases as the number of crops from \\\\textit{clean} image being classified correctly increases, but adversarial training decreases performance on clean samples and consequently decreases certification probability. \\n\\nA deterministic crop selection covering the whole image was considered, but we found that random selection is easier to select the number of crops, while having comparable certified accuracy.\\n\\nWe list the percentage of images that cannot be certified but can be successfully attacked by image-specific patches in the below table with the corresponding overall accuracy. We train our image-specific patch by training an adversarial patch which is placed at the center of the image for each image. We train each patch for 100 steps with step size 0.02. Please note that although some images are still correctly classified under image-specific patch attack, there can still exist a patch that can change its classification output. Due to time constraints, we only show CIFAR with patch size 2.4\\\\% and ImageNet with patch size 2\\\\% with no patch transformation.\\n\\n___\\n CIFAR10 2.4\\\\% patch || ImageNet 2\\\\% patch\\n___\\n| ours | DRS | PG+DRS / PG+Bagnets || ours | DRS | PG+DRS / PG+Bagnets\\n___\\nattack success rate | 83.2 | 94.3 |92.1 / 87.3 ||78.5 | 84.9 |83.2 / 79.3 \\nclassification accuracy| 60.3 | 59.5 |61.4 / 40.4 ||34.4 | 27.0 |30.1 / 31.2\"}",
"{\"title\": \"We added confidence level experiment in Appendix D\", \"comment\": \"Thank you for the helpful review. We have made corrections to Eq. 6 and added experiments and discussion of confidence level in Appendix D.\\n\\nCertified accuracy listed in Table 1. (and Table 3) are when the adversarial path undergoes affine transformation, while adversarial patches in Table 2 are all square with edges align with image coordinate axes. For example, a 2.4\\\\% patch for CIFAR10 is always a 5x5 square and its edges align with image coordinates, i.e., edges of the patch are parallel to image edges for Table 1; a 2.4\\\\% patch for CIFAR10 can be a 2x12 or 3x8 rectangle for Table 2.\"}",
"{\"title\": \"We have added ablation study of $p_c$ and revised the manuscript for improved readability\", \"comment\": \"Thank you for the helpful review. We've added training time to the first paragraph of Section 4. Clean accuracy are listed in Table 1 and a discussion on clean accuracy is added to Section 4.1. The ablation study of $p_c$ is added to Table 3.\\n\\nWe respectfully disagree with the reviewer that our method has only marginal improvement over PG+DRS -- without patch transformation, we achieve comparable certification accuracy as PG+DRS while the inference time is more than a magnitude faster; with patch transformation, our worst-case certification accuracy is 4 times better than PG+DRS. This comes from our basic idea of using crops instead of ablating regions of an image.\"}",
"{\"title\": \"We've added suggested analysis and extra experiments in Appendix, and revised the manuscript for improved readability\", \"comment\": \"Thank you for the thorough and helpful review. We've added a brief overview of De-randomized smoothing and PatchGuard in Appendix A, pseudo code explaining how certification probability is computed in Appendix B, and derivations and experimental results for uniform sampling without replacement in Appendix C. We've also expanded discussion on how certification rate is reported at test time in the \\\"metric\\\" paragraph in Section 4 and the effect of crop size vs patch size in Section 4.4. We analyze all three components of $f_{\\\\theta}$ in the \\\"Training randomized cropping classifier\\\" paragraph of Section 3.1 and show that $f_{\\\\theta}$ has the same set of trainable parameters as $g_{\\\\theta}$. We had several passes of revising the manuscript to improve the readability.\"}",
"{\"title\": \"An elegant defense. But text and discussion must be improved.\", \"review\": \"### Summary\\n\\nThe paper proposes a robust neural network architecture to defend against\\nadversarial patch attacks --where the attacker can freely modify a small patch\\nof size $s_{patch}$ of the input images-- by training the net on random\\ncrops of a pre-defined size $s_{crop}$ (in general $\\\\neq s_{patch}$), and, at\\ntest time, use majority voting over several such random crops for every image.\\nThey show that, as long as $s_{patch}$ is small ($\\\\leq 2.4\\\\%$ of total image),\\nthese architectures achieve usual and **certified** robust accuracies that are\\ncompetitive with the SOTA among the techniques and architectures that were\\ntailored to certifiably defend against such attacks.\\n\\n\\n### Overall evaluation\\n\\nThe defense is elegant for its simplicity and seems efficient, both in terms of\\nperformance for small attack patches (which, arguably, is the most interesting\\ncase), and in terms of computational speed/complexity (for small enough sample\\nsize of crops). But the paper is still unclear, vague and/or too\\napproximate in some parts (see points below). It seems to have been written in\\na rush (Fig.2 on the right is obviously not the one intended) and some\\nadditional experiments and/or discussions would be useful to complement the\\nevaluation of the current method (see point 8. below). So, overall, I don't\\nrecommend publication in the current state but am inclined to reconsider when I\\nwill see appropriate changes in the paper.\\n\\n\\n### Detailed remarks/questions\\n\\n1. Since you repeatedly refer to de-randomized smoothing and PatchGuard in the\\n text (and their combination), I suggest adding a short self-contained\\n description (in appendix with a reference) at the beginning (or in appendix\\n with a reference at the beginning) for the non-informed readers.\\n2. I suggest to clearly mention that what is certified is the robust **test**\\n accuracy, not the distributional robust accuracy.\\n3. I think that you are sampling the crops **uniformly** at random **with**\\n replacement, but I don't recall reading this explicitly in the text. Anyway,\\n it would be good to compare both approaches (with and without replacement),\\n at least in appendix.\\n4. The description of Table 1 is in Sec.4.1 with title \\\"Without patch\\n transformation\\\", yet the caption says (and Table 2 confirms) that it shows\\n results **with** patch transformations. Which one is right? And if it's with\\n patch transformation, please add the same table for the setting without patch\\n transformation (at least to appendix).\\n5. Eq. 6: capital $N$ in $C^N_i$ should be $n$.\\n6. Please do an additional pass of proof-reading\\n7. Sec 3.2: the description of how to compute the certifiable robustness bound\\n could be **significantly** improved and seems to be written in a big rush. In\\n particular, you never explicitly say which formula is used to compute the\\n robust bounds.\\n8. _Crop size vs patch size_: The trade-off between crop size and attack patch\\n size is discussed in a rush at the end of the experiments (Sec. 4.2): it\\n deserves attention and an intuition explanation of the trade-off much earlier\\n in the paper, f.ex. when explaining the overall attack. 
Also, the paper could\\n benefit a lot from a finer analysis of the dependence of the optimal crop-size\\n on the attack's patch size (and probably on the typical size of \\\"relevant\\n information\\\" in the images). This could be either empirical (studying the\\n optimal crop-size as a function of the patch size) or more theoretical (with an\\n image model, or by using the average size of relevant objects in the image).\\n Also, it could shed more light on why your attack works better on ImageNet than\\n on CIFAR10 (probably because the typical size of the relevant parts of the\\n images are different in both datasets).\\n9. _Sec. 3.1 \\u00a7Training randomized cropping classifier_: the sentence \\\"the only\\n trainable part of the randomized cropping classifier is the crop\\n classifier\\\" deserves a proper Proposition/Theorem (with precise and explicit\\n hypotheses, s.a., f.ex. the assumption that crops are sampled **uniformly**\\n at random with replacement) and a proper, well-delimited proof. \\n\\n------------------------------------\\n\\nUpdate after rebuttal\\n-----------------------------\\n\\nThe updated version is clearly better than the first one. However, the concerns\\nof the other reviewers regarding the randomness of $p_c$ (and therefore of the\\ncertification method) have convinced me that there are, indeed, some further\\nclarifications and discussions needed prior to the publication, which is why I\\nwill keep my initial recommendation.\\n\\nTo be more precise, the root issue here seems to be that the proposed\\nclassifier is not deterministic, which means that the standard definitions of\\nadversarial examples and adversarial accuracy do not apply and therefore, that\\nthe problem that you try to solve is unclear and/or not well defined. In\", \"particular\": \"what is it that gets certified? what does it mean to get\\ncertified? and, more generally, is the word \\\"certified\\\" really appropriate in\\nthis context?\\n\\nHowever, whether an analysis of the distribution of $n_{2to1}$ and $p_c$ will\\nbe needed (as asked by other reviewers) might depend on how the authors will\\ndefine adversarial vulnerability in the random setting, and what they try to\\ncertify. Let me explain what I mean.\\n\\nA reasonable start might be to define adversarial risk as\\n$$\\n E_{(x,y)} E_{\\\\phi} \\\\mathcal{L}(\\\\phi(x), y) \\\\ , \\\\tag{1}\\n$$\\nwhich is the usual definition, but with an additional expectation over the\\nvariability of the classifier $\\\\phi$. Adversarial accuracy would then be the\\nadversarial risk for the 0-1-loss $\\\\mathcal{L}_{0-1}$. Then the authors could, f.ex., set as goal\\nto construct a (provably) unbiased estimate of this quantity.\\n\\nThe advantage of such a method is that one doesn't forget the fact that, what\\nwe actually want to certify is this \\\"distributional\\\" robustness (i.e. where\\nexpectation is taken over the true underlying, unknown distribution), not the\\nrobustness on the test set. Even methods that have a non-random certification\\nprocess (so-called \\\"provable robustness guarantees\\\") will never be able to\", \"certify_this_quantity\": \"they'll only deliver certificates on test example. The\\n\\\"certified robustness on the test set\\\" that they yield is also just a random\\nvariable which we hope \\\"generalizes to\\\" (1). Reviewers almost never ask authors\\nto analyze/certify this generalization gap. 
Similarly, here, one could see the\\nrandomness over $n_{2to1}$ and $p_c$ as just another source of randomness\\ncontributing to the variability of the generalization gap, in which case, maybe\\nno rigorous analysis could be acceptable, as long as it is clear what the\\nauthors want to certify (unbiasedness of the estimator, f.ex.). Therefore,\\nwhether this source of randomness could or should be explicitly captured/used\\nby the authors' method is, I think, a question of how the authors frame the\\nproblem and their goal.\\n\\n### Minor points:\\n\\n- even the revised version still contains quite a few grammatical errors,\\n especially in the new/re-worked sections, where many articles (\\\"the\\\", \\\"a\\\")\\n are missing.\\n\\n- End of p.5, \\\"to maximize number of certified robust images, the randomized\\n cropping classifier should maximize n2to1, which is equivalent to maximizing\\n classification accuracy of $g_\\\\theta$\\\": not sure about this equivalence.\\n Maximizing the classification accuracy is equivalent to maximizing n1, not\\n necessarily n1-n2.\\n\\n- are DRS and PG also probabilistic certifications (i.e. certifying robustness\\n with some probability, f.ex. pc>.95)? This should be clearly said in the text\\n and the captions, especially since it would make the comparison a bit unfair\\n if their certification were 100% sure. (This issue is obviously related to my\\n previous major remark on randomness.)\\n\\n- the name \\\"worst-case certified accuracy\\\" in the caption of Table 1 is very\\n unclear at that point. It becomes clear in Sec. 4.3, but you refer to Table 1\\n in Sec. 4.1 already. So this term should be clearly explained in the caption,\\n or there should be a clear reference to the relevant part in the text.\\n\\n- don't always re-cite Levine&Feizi and Xiang et al. every time you mention\\n de-randomized smoothing and patch guard. Cite them the first time, and then\\n say that you'll refer to de-randomized smoothing and patch guard as DRS and\\n PG in the rest of the text.\\n\\n- in the conclusion: \\\"This paper proposes a new architecture for defense\\n against\\\" -> \\\"This paper proposes a new defense against\\\". (You are not really\\n proposing a new architecture.)\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"The paper proposed a simple way to defense adversarial patch attacks.\", \"review\": \"The method can be basically summarized as a majority voting of crops of an image. Moreover, a new certification of the proposed method is introduced, not similar to the conventional adversarial robustness certification on perturbation under $\\\\ell_p$ ball , the method using simple geometry and probability problem to certify the results instead of any relaxation of the neural networks.\\nHowever, the paper is poorly written, and many experimental setups are missing.\", \"update_after_rebuttal\": \"After reading the rebuttal, I don't think my questions are addressed very well, especially about the confidence probability, $p_c$. The certification is defined as a guaranteed yes/no problem but the $p_c$ will relax the certification to a probabilistic problem. Also, PatchGuard with patch transformation is out of scope for the original paper, so I think the experimental results in Section 4.1 and 4.2 are more like a fair comparison. However, refer to the results in table 3, the proposed method yields worse performance than PG-DRS although the computational cost is saved. Hence, I will keep my rating.\", \"pros\": \"+The proposed method is quite general and intuitive.\\n\\n+Using a simple geometry and probability model, the proposed method can certify the robustness of the adversarial patch attack in a very efficient way.\", \"cons\": \"-What's the training time of the crop classifier $g$ compare with other baselines?\\n\\n-The whole system highly depends on the test accuracy of $g$. What's the result if no attack involved in?\\n\\n-From the experimental results, the proposed method has margin improvement compare with PG+DRS, and the basic idea of the certification algorithm also follows PG+DRS. So, the contribution is not enough for ICLR.\\n\\n-The hyperparameter $p_c$ is set as 0.95 in the experiment. However, the certification needs guaranteed results. So, if the $p_c = 0.95$, is that means there is a 5% possibility that this image can be attacked? Although this may not change the results so much, the ablation study of this parameter setting should be performed to make the comparison fairly.\\n\\n-Figure 2 (right) seems to use the wrong plot.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"Update after the rebuttal:\\n\\nI read other reviews and response from the authors and I decided to keep my score. Overall, I think paper still needs more\\nwork. For example, incorporating details on confidence into the main paper and not just a section in the appendix is quite\\nimportant as otherwise the paper is misleading.\\n\\n==================================================\\n\\n-> Summary: \\nIn this paper, authors propose a new certified defense against adversarial patches. They propose a model\\nwhich samples different patches from the original image, performs classification of these patches using neural network classifier and performs a majority vote to compute the output label. The guarantee is obtained by computing a probability that none of the sampled patches intersect adversarial patch.\\n\\n-> Reasons for score:\\n\\nI vote for rejecting this paper. The main issue I have is that their probabilistic guarantees lack confidence intervals and \\ntherefore it is not clear how meaningful they are. Further issue is that the method works better than prior work only if \\ncertain image transformations are applied, such as rotation. Yet, this critical part is not described formally: what is rotated, entire image or just a patch? Same holds for aspect ratio. Due to these problems, I cannot recommend acceptance for this paper.\\n\\n-> Pros:\\n\\nI think explanations of the method are easy to follow and writing is solid, with some parts that require more clarifications.\\n\\n-> Cons:\\n\\nThe biggest problem I have with this paper is that authors do not consider confidence intervals for their guarantees. In particular, probability p_c computed in Equation 6 is tied to the particular n patches that were sampled which determines the value of n_{2to1}. This means that n_{2to1} is random variable and p_c that authors compute holds for just one sampled value of n_{2to1}. So for this bound to be useful, we need some form of confidence interval. For example, guarantee could be that with 99% probability p_c is interval [p_c_low, p_c_high]. And then, only if p_c_low is greater than some probability threshold (which is set to 95% in Section 4), we can report that this sample is certified. If I misunderstood something, it would be great if authors can clarify what kind of guarantee is provided.\\n\\nAdditionally, I find method used here relatively trivial and I think more technical contributions are necessary (e.g. computing confidence intervals above). In terms of empirical results, it seems that this method works better than prior work only in the case of additional image transformations and otherwise it performs worse or same. As this is the critical thing, I think authors should provide more explanations on how are these transformations applied, more formally instead of just describing it.\\n\\n\\n-> Questions:\\n\\n- In Equation 6, summation is over i in the set {0, n_{2to1}}. Should this summation be over 0 <= i < n_{2to1} instead?\\n- Can you clarify what is the difference between the experiments in Table 1 and Table 2? If I understand correctly, the experiments in Table 1 have additional transformation that is applied besides the adversarial patch? \\n- Can you write mathematical formulation of transformations in Section 4.2? 
I am not sure whether these transformations are applied over the entire image or only over the patch, so it would be good to write more formally what exactly is the transformation.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
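One standard way to supply the confidence interval this review asks for, borrowed from randomized-smoothing certificates, is a Clopper-Pearson bound over repeated runs of the randomized vote. The sketch below is one possible fix under that framing, not something taken from the paper.

```python
from scipy.stats import beta

def clopper_pearson_lower(k, t, alpha=0.01):
    """One-sided (1 - alpha) lower confidence bound on a Bernoulli parameter
    after observing k successes in t independent trials (Clopper-Pearson)."""
    return 0.0 if k == 0 else beta.ppf(alpha, k, t - k + 1)
```

For example, if the margin condition held in 196 of 200 independent re-runs of the vote on one image, `clopper_pearson_lower(196, 200)` gives a 99% lower bound on the per-run success probability, and the image would be reported as certified only when that bound clears the chosen threshold.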
"{\"title\": \"Surprisingly simple approach to an important problem\", \"review\": \"The authors propose a surprisingly simple statistical defense, that can certify the robustness of a classifier against patch attacks. This is achieved by randomly sampling small rectangular subregions of the perturbed image and classify these samples individually.\\n\\nThe paper is mostly well written. Certification methods, able to handle patch attacks, are of significant interest in real world applications like autonomous driving. The paper is charming because of the simplicity. However, some questions remain unanswered and the experimental evaluation is not very thorough, hence this paper is borderline.\", \"questions\": [\"Have the authors considered some kind of adversarial training?\", \"As far as i understood the paper, the certification method is probabilistic due to the probabilistic intersection of sampled subregions with the adversarial patch. Could this be circumvented by defining a fixed set of subframes that get classified? Then the number of overlapping subregions could be calculated precisely in advance and the certification would not be probabilistic.\", \"How does the right figure in Figure 2 look like? The current one seems to be the wrong one.\", \"In Table 1, Table 2 and Table 3, the authors provide certification rates. Could the authors also provide how many images of the not certified ones can be successfully attacked?\", \"When is $p_c$ as in Equation 6 considered to be close enough to 1, is it $0.95$ as in Section 4?\", \"How do the certification rates change for larger/smaller adversarial patches?\"], \"comments\": \"- $C_i^N$ in Equation 6 should be defined. \\n- A vertical line between the CIFAR10 and the ImageNet results in Table 2 & 3 would improve readability.\\n\\nI am willing to increase my score if my questions and concerns are addressed. \\n\\n-------------------\\n\\nAfter reading the authors response, i think that this work would benefit from an experimental comparison of their random method against a method relying on a deterministic crop selection, even if the certification rates of the deterministic method are inferior, because the certificates both methods are yielding are different: The resulting certificates from the deterministic method would be deterministic instead of probabilistic. Hence i will retain my score.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
Hr-cI3LMKb8 | Leveraging affinity cycle consistency to isolate factors of variation in learned representations | [
"Kieran A Murphy",
"Varun Jampani",
"Srikumar Ramalingam",
"Ameesh Makadia"
] | Identifying the dominant factors of variation across a dataset is a central goal of representation learning. Generative approaches lead to descriptions that are rich enough to recreate the data, but often only a partial description is needed to complete downstream tasks or to gain insights about the dataset. In this work, we operate in the setting where limited information is known about the data in the form of groupings, or set membership, and the task is to learn representations which isolate the factors of variation that are common across the groupings. Our key insight is the use of affinity cycle consistency (ACC) between the learned embeddings of images belonging to different sets. In contrast to prior work, we demonstrate that ACC can be applied with significantly fewer constraints on the factors of variation, across a remarkably broad range of settings, and without any supervision for half of the data. By curating datasets from Shapes3D, we quantify the effectiveness of ACC through mutual information between the learned representations and the known generative factors. In addition, we demonstrate the applicability of ACC to the tasks of digit style isolation and synthetic-to-real object pose transfer and compare to generative approaches utilizing the same supervision. | [
"variation",
"factors",
"acc",
"affinity cycle consistency",
"learned representations",
"data",
"dataset",
"generative approaches",
"groupings",
"supervision"
] | Reject | https://openreview.net/pdf?id=Hr-cI3LMKb8 | https://openreview.net/forum?id=Hr-cI3LMKb8 | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"zdEeV_fdD2L",
"Fsns8fr8kd",
"1aAMOlelu85",
"UYAM05-XeD",
"iymodef4ZVy",
"R0wsaF1dV7b",
"wkTJxDFf4xj",
"NhQrXwDlGj",
"rM6dVnHw5em",
"0k5-qk4Wz54",
"lLAI4msmLGO",
"NmLlHwc88Hh",
"gtxoHhtx0a-",
"THL-OaEF_kZ"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040459925,
1606152549529,
1606138636198,
1605916104240,
1605915978463,
1605915670404,
1605915615222,
1605915005949,
1605914672020,
1605143018546,
1603892193721,
1603877902274,
1603804078817,
1603707061775
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3582/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3582/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3582/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3582/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper proposes to employ affinity cycle consistency(ACC) for extracting active (or shared) factors of variation across groups. Experiments shows how ACC works in various scenarios.\", \"pros\": [\"The problem is important and relevant.\", \"The paper is well written.\", \"The proposed method is simple and effective.\"], \"cons\": \"- The experimental section is weak:\\n It lacks an ablation to validate the contribution of ACC and discussion on\\n why the method works and the scalability of the proposed method to more complex cases.\\n- The novelty is limited because the proposed ACC is similar to previous work temporal cycle consistency(TCC).\\n- The paper missed some implementation details and could be difficult to reproduce without code\\n provided.\\n\\nReviewers raised the concerns listed in Cons. The authors conducted additional experiments and added more discussions on the experimental results in the revised paper. The authors also explained that ACC is more general than TCC. However, the reviewers were not convinced by the rebuttal and kept their original ratings.\\n\\nDue to the two main weaknesses -- limited novelty and weak experimental analysis, I recommend reject.\"}",
"{\"title\": \"Marginal independence is the goal, not a constraint\", \"comment\": \"Thank you for the insightful question. No, we do not assume an independence constraint between different factors of variation.\\n \\nInstead of being an implicit constraint, marginally independent factors are the factors of variation sought by the loss. A simple example from the MNIST dataset: a factor of variation which exists only for 2s is whether the bottom has a loop or not. This factor of variation is dependent on the inactive factor (digit class), and would help with distinguishing from among a set of images of the digit 2, but does not help in finding correspondence with a set of images of the digit 3. \\n\\nFor the pose estimation experiments, there are many factors of variation dependent on the specific car model, e.g. the pixel distance between the two headlights in the image. Some of these factors may be the start to finding pose but their dependence on the inactive factor leads to suboptimal performance with respect to the ACC loss. Pose is a higher level factor of variation which can be used throughout training for the correspondence task precisely due to its marginal independence.\\n\\nThe simplicity of the Shapes3D dataset allows us to fully enumerate and factorize the factors of variation, making the analysis easier but perhaps giving the impression that the method requires such cleanly separable factors of variation.\"}",
"{\"title\": \"Further question\", \"comment\": \"Thank you for answering the questions and updating the submission. I have a question regarding the following sentence.\\n \\n\\u201cIn order to train ACC with a particular generative factor inactive, for each training step we randomly sample from among its possible values and hold it fixed across a set of inputs, while sampling uniformly across the remaining factors to generate a set of size 32.\\u201d\\n \\nIt seems in this setting, the distribution of other factors is not influenced by the selected value of the inactive factor. If so, this means, statistically, the inactive factor is marginally independent of other factors: P(inactive, other) = P(inactive) P(other). Is this an implicit constraint of the algorithm, in addition to the set membership?\"}",
"{\"title\": \"Emphasizing the generality of the method\", \"comment\": \"Reviewer 4 is concerned about the \\u2018innovative contribution\\u2019 of our work, with the essence of the criticism contained the following sentence:\\n\\n*The only difference is: this work is built based on affinity relation between pairs of images with similar postures, while the original idea was applied for temporal sequence alignment in video processing.*\\n\\nWe respectfully disagree.\\n\\nFirst, the central premise of the paper is the generality of the method, with pose estimation one of three use cases intended to showcase its breadth of applicability. The pose estimation of Sec. 4.3 serves to elucidate novel aspects of ACC -- such as the ability to have an unconstrained second set -- and ground its representation-learning capabilities in a challenging and realistic problem setting. The paper, as a whole, is about elevating a narrow-purpose method to temporally align video sequences to a significantly broader range of applications. We demonstrate that ACC is a powerful discriminative approach to interrogating datasets, and take important steps to understanding its capabilities, limitations, and realm of application. \\n\\nSecond, we introduce multiple expansions to the original method of Dwibedi et al. which further serve to generalize the method. We show that one of the two input sets during training can be completely unconstrained, which allows us to incorporate unannotated, out-of-domain images during the pose estimation task. In a set of pose estimation experiments added in the resubmission to Sec. 4.3, we demonstrate that the ability to incorporate unannotated real images significantly improves pose regression on real images over the spherical regression framework of Liao et al. (2019). Additionally, we modify the cycle consistency loss to allow extra control over nuisance factors of variation by way of double augmentation, and measure the efficacy in Fig. 3c.\\n\\nUltimately, the modifications to the original method of Dwibedi et al. further serve to generalize it and open the door for a far wider range of applications than aligning videos.\"}",
"{\"title\": \"Expanded analysis and discussion around factor isolation\", \"comment\": \"We thank the reviewer for the time taken to carefully read our paper and have followed the suggestion to make a comparison to the related method of Bouchacourt et al. (2018).\\n\\n*I believe the paper will make a far more compelling case if there are analysis experiments presenting the strengths of the approach that provides insights into why certain factors are easily isolated and others are not.*\", \"we_have_focused_much_of_the_added_material_to_shed_more_light_on_factor_isolation\": [\"Revised Discussion\", \"Appx. B: Shapes3D factors embedded in higher dimensions\", \"Appx. C: More constrained settings for Shapes3D, getting at the \\u2018tougher\\u2019 geometric factors\", \"Appx. D: Why a single factor of variation isn\\u2019t enough for the ACC loss\", \"*Experiments of Fig. 3... Why are the other factors not isolated?*\", \"Solving the correspondence task by way of the ACC loss does not require extracting every factor of variation. The factors of variation differ in salience with respect to the embedding network, as shown by the mutual information results of Fig. 3 where the hue factors are consistently more easily extracted.\", \"Added to the Discussion (P9): \\u201cWhile isolating multiple active factors may lower the ACC loss on average, factors differ in salience. The hue-related generative factors of Shapes3D appear easier for the specific network to identify, so once a correspondence utilizing these factors is found, training effectively ceases. Similarly, nuisance factors of variation in the images of cars and chairs are easier for a network to identify than camera pose, which is why double augmentation helped to encourage the network to isolate pose.\\u201d\", \"Added numerical results to Appx. D which show the ACC loss decreases by finding more independent factors of variation but depends on the size of the training sets.\", \"*It is not clear how the temperature value (as used in Definition 2) is set. How sensitive is the performance to this parameter.*\", \"The temperature sets the scale of lengths in embedding space, meaning all interpoint distances will simply be expanded or contracted by the embedding network to match the temperature if using anything derived from Euclidean distance. Cosine similarity is bounded, however, making temperature a meaningful hyperparameter for the pose estimation experiments.\", \"Added to Appx. A, implementation specifics: \\u201cWe used squared L2 distance as the embedding space metric and a temperature of 1, though as long as the length scale set by the temperature is larger than the initial point spread from the randomly initialized network, it does not seem to matter.\\u201d\", \"*How is the dimensionality of the embedding space determined? How does it effect the isolation / disentangling performance? \\u2026 Could this be a reason for the other factors not being isolated in Fig. 3?*\", \"The Shapes3D experiments of Fig. 3 used two dimensions so that mutual information could be reliably measured, but we show the same behavior (though probed differently) occurs in 64-dimensional embedding space (Appx. B). We find that higher dimensions help with training even though the learned embedding distribution is generally relatively low dimensional as measured by PCA.\", \"Added reproduction of Fig. 3 experiments in higher dimensions, Appx. B\", \"Added dimensionality ablative study for pose estimation, Appx. 
E.\", \"*Comparisons with group level supervision work, e.g. (Bouchacourt et al., 2018)*\", \"Implemented ML-VAE (Bouchacourt 2018) referring to the code posted at https://github.com/DianeBouchacourt/multi-level-vae and added the results to Tab. 1 (pose estimation) and Appx. F (digit retrieval).\", \"Added to Sec. 4 (P7): \\u201cWe compare to the representations yielded by two VAE-based approaches which utilize grouped data to separate factors of variation: CC-VAE (Jha et al., 2018) in Fig. 4 and ML-VAE (Bouchacourt et al., 2018) in Appx. F.\\u201d\", \"Added to Sec. 4 (P8): \\u201cThe significant difference between ACC and the generative approaches underscores the importance of meaningfully incorporating unannotated real images during training; there is no simple means to do so with either VAE-based method.\\u201d\", \"*There are other mistakes, possibly typographical in nature.*\", \"Corrected the two that were raised and checked over the rest of the paper.\"]}",
"{\"title\": \"Expanded discussion and provided additional control experiments (continued)\", \"comment\": [\"*The method is simple but reproducing would probably need some additional implementation details (architectures, optimization method, hyper-parameters...)*\", \"Added implementation specifics to Appx. A\", \"Including iPython notebook with code to run the MNIST digit style isolation\", \"We plan to post the code for the Shapes3D and pose estimation experiments after cleaning it up.\", \"*While the active/inactive features formalism is clear, it still might help to illustrate with examples how some classical tasks (for instance in domain transfer) fit in.*\", \"We have tried to elucidate the distinction between active and inactive factors in more settings.\", \"Added to Discussion (P9): \\u201cA correspondence can be made when each element is embedded according to only a single active factor of variation common to both sets. This was the case for Dwibedi (2019), where the progression of an action (e.g., bowling) was the only active factor of variation (with scene specifics being inactive, fixed per video).\\u201d\"]}",
"{\"title\": \"Expanded discussion and provided additional control experiments\", \"comment\": \"We thank the reviewer for their feedback following a close reading of our paper, and address raised points below.\\n\\n*... very similar to the previously introduced Temporal Cycle Consistency.*\\n\\nWhereas the aim of Temporal Cycle Consistency was to introduce a method to align video sequences of a given action, it is our express goal to show the generality of the method and bring it into the active research direction of optimally leveraging weak supervision to extract useful representations. We develop the language and framework necessary to cast the methodology of Dwibedi 2019 into a far broader range of scenarios. \\n\\nIn addition to the contribution of generalization, we introduce two modifications to the method which further serve its generalization and dissemination. Unannotated data from a similar distribution can be incorporated, and the effect it has on the pose estimation task in Tab. 1 (the difference between ACC and the VAE-based approaches) is enormous. We also introduce the double augmentation scheme to serve as another tool for operating on factors of variation in a dataset. \\n\\n*The paper advocates for the use of ACC for representation learning but doesn't provide an explanation about why the method should work.*\\n\\nWe have focused much of the additional material in the revision on providing intuition about why and how ACC works. \\n- Rewritten Discussion Sec. \\n- Added Appx. B: repeats the Shapes3D experiments in higher dimensions\\n- Added Appx. D: employs Monte Carlo numerics to show why multiple factors of variation are extracted when only one should be needed to find correspondence \\n- Included GIF of the training evolution of MNIST, which helps visualize ACC (supplemental material)\\n\\n*While the experimental results are intriguing, they aren't very convincing in terms of the usefulness of the proposed method\\u2026. raises concerns about how well the method can generalize to other problems or scale to more difficult images.*\\n\\nWe have included an additional pose estimation experiment (Tab. 2 in Sec. 4.3) which further showcases the ability of ACC to isolate pose and enhance performance on a challenging real-world task. We add a spherical regression head (using the framework of Liao et al. 2019) on top of the ACC embedding space and operate in the data setting where pose annotations are available for synthetic images but not real, with the test set from Pascal3D+. \\n\\nThe conclusion is that ACC allows the incorporation of unannotated real images which significantly improves the accuracy compared to the baseline of regression trained solely on synthetic images.\\n- The median angular error drops from 12.3 to 9.3 deg for Pascal3d+ cars, and from 30.8 down to 26.0 deg for chairs. (Fig. 6, Tab. 2).\\n\\n*Results in Sec. 4.3 are obtained using a 2-dimensional latent space, which is useful for visualization but is a very odd choice for practical uses.*\\n\\nThe 2-dimensional latent space was chosen to facilitate the mutual information measurement. We reproduce the results in higher dimensional embedding spaces in Appx. B.\\n- Added Appx. 
B: repeats the Shapes3D experiments in higher dimensions\\n- Added to the Discussion (P9): \\u201c[The factor isolation behavior of Fig 3] can be partly attributed to the low dimensionality of the embeddings -- a design choice to allow the measurement of mutual information, which is notoriously problematic in higher dimensions -- though we show in Appx. B that the effect is also present for 64-dimensional embeddings.\\u201d\\n\\n*the authors had to use a complex pipeline for these relatively simple images\\u2026*\\n\\nWe assume that the complex pipeline referred to was, at least in part, the nearest neighbor lookup used for the results of Tab. 1 in order to extract pose information. This was by design: this scenario was meant to showcase the performance of ACC trained without a single ground truth annotation for the quantity of interest. \\n\\n*Can the authors comment on the importance of the size of the embeddings? \\u2026 What happens when using larger embedding?*\\n\\nWe used two dimensions for the Shapes3D experiments in order to facilitate the mutual information measurements. In experiments added to Appx. B, we probe 4, 16, and 64 dimensional embeddings by using a simple FF network to classify each of the six generative factors given the embedding as input. In higher dimensions, factor of variation isolation is subtly different, and connected to length scales over which the information is encoded.\\n- Added Appx. B: reproduction of Fig 3 experiments in with higher-dimensional embedding space \\n- Added Appx. E: ablation studies for the pose regression.\\n\\n*Can the author confirm that phi:A->L and phi:B->M are implemented as two different convolutional networks?*\\n\\nNo, there is only one network (shared weights) which does the embedding for both input sets.\\n- Added to Methods (P4): \\u201cFunctionally, we parameterize \\\\phi with the same neural network for both [input sets] A and B.\\u201d\"}",
"{\"title\": \"More material about factor of variation isolation by ACC, and reproducibility\", \"comment\": \"We are glad the Reviewer appreciates the problem as \\u201cdifficult and important\\u201d and thinks \\u201cprogress is good\\u201d. We follow the reviewer\\u2019s suggestions and include more material about what factors are isolated and why, as well as implementation details.\\n \\n*The experiments and discussions do not cover enough cases for extracting and suppressing factors.*\\n\\nWe have greatly expanded discussion and experiments around extraction and suppression of factors of variation. \\n- Rewrote Discussion Sec.\\n- Added Appx. B about what factor isolation means in higher dimensions by way of the Shapes3D experiments\\n- Added Appx. C with more constrained Shapes3D experiments (more inactive factors)\\n- Added Appx. D with Monte Carlo results that show why the task of finding correspondence is benefitted by multiple independent factors\\n\\n\\n*It would be more helpful to find how many elements are necessary from theoretical and / or empirical perspectives.*\\n\\nThis is a good observation -- that in order to optimize the loss and find correspondence between set A and set B, the size of the two sets should influence how many factors the representations extract. \\n- Added Appx. C with Monte Carlo results which show that using larger sets encourages finding more independent factors of variation.\\n\\n*Suppression of complicated factors\\u2026 When inactive, a factor takes a fixed value. So when it is inactive both in A and B, the factor has at most two values in training\\u2026*\\n\\nTo clarify, when a factor is inactive, it takes a fixed value only for a single input set. The next training sets resample the inactive factor anew from among its possible values, so over the course of training the inactive factors are well sampled. \\n- Clarified text of Sec. 4.1: \\u201cIn order to train ACC with a particular generative factor inactive, **for each training step** we randomly sample from among its possible values and hold it fixed across a set of inputs...\\u201d\\n\\n*It would be helpful to see what happens when the three hue factors and a geometric factor (e.g. orientation) are all inactive.*\\n\\nWe agree, and added a new Appx. with these results. Interestingly another instance arises where the factors of variation have different salience to the embedding network: while there are four possible shapes, the embeddings consistently split into only two groupings. Evidently distinguishing between the cylinder and cube, and the sphere and pill, respectively, are harder than distinguishing whether the top of the primitive is rounded or flat.\\n- Added Appx. C, which repeats the experiments of Fig. 3c but with all possible inactive factor combinations of three hue factors and one geometric factor. \\n\\n*The paper does not provide code to reproduce the results.*\\n\\n- Included iPython notebook to reproduce digit style isolation results on MNIST (supplemental material)\\n- Added Appx. A with implementation specifics for all experiments\\n- We plan to post the code for the Shapes3D and pose estimation experiments at a later date\\n\\n*The last Sec. is named \\u201cDiscussion\\u201d, but it looks like \\u201cConclusion\\u201d from its contents and position.*\\n\\nWe have added a new Discussion Sec. 
to consolidate various insights from the experiments, and retitled the prior Discussion to be the Conclusion.\\n\\n*Some figures and letters on them are very small.*\\n\\nWe revised Fig. 5 to improve its readability.\"}",
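To make the set-size intuition in the response above concrete, here is a toy Monte Carlo in the spirit of the Appx. D results: a sketch under the assumption of discrete factors sampled uniformly and independently, not the authors' code. It estimates how often two elements of a set agree on all of the first k factors, in which case those k factors alone cannot disambiguate the correspondence.

```python
import random

def p_ambiguous(set_size, n_factors, n_values, trials=10_000):
    """Monte Carlo estimate of the probability that some pair of elements in a
    random set collides on all of the first `n_factors` factors, each sampled
    uniformly from `n_values` values."""
    hits = 0
    for _ in range(trials):
        keys = [tuple(random.randrange(n_values) for _ in range(n_factors))
                for _ in range(set_size)]
        hits += len(set(keys)) < set_size        # at least one collision
    return hits / trials
```

With sets of 32 elements and 10 values per factor, `p_ambiguous(32, 1, 10)` is exactly 1 by pigeonhole, while the estimate drops quickly as more factors are added, consistent with larger training sets pushing the embedding to pick up more independent factors.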
"{\"title\": \"More insight, more experimental backing\", \"comment\": \"We thank the reviewers for the detailed and constructive feedback. We appreciate the positive feedback that this is a \\u201cdifficult and important problem\\u201d (R1) to which we propose a \\u201csimple and effective method\\u201d (R2, R3); our results uncover \\u201cintriguing properties\\u201d worthy of further exploration (R2); and that the manuscript contains clear formalism (R2), is \\u201cwell written\\u201d (R4), and has \\u201cinformative related work\\u201d (R2). We address some of the common concerns below:\\n\\n**Insufficient analysis of the factors of variation that are extracted by ACC (R1, R2, R3):**\\n\\nAffinity cycle consistency isolates active factors of variation in learned representations, but generally not with respect to all of them. We devoted the majority of the original submission to demonstrating that ACC works in a wide variety of scenarios and developing a language for describing the performance. The next natural question which arises, after the framework has been established and the method is shown to work, is why: specifically, why some factors but not others? The fact that so many of the reviewers had this question is evidence that our case was successfully made. \\n\\nMost of the added material in the resubmission focuses on this topic, to leave the reader with more insight about how the factor of variation isolation works in practice. Please see the change list below with specifics.\\n\\n**Insufficient technical novelty (R4, R2):**\\n\\nTo clarify, the primary contribution of our paper is to elevate a narrow-purpose method for temporally aligning video sequences to a general framework for operating on factors of variation given generic data groupings. Beyond generalizing affinity cycle consistency and opening the door for myriad new use cases, we also introduce two impactful innovations: the incorporation of unannotated and possibly out of domain data, and greater control over factors of variation with the double augmentation modification to the loss. We think the sum total contribution of this work to the field of representation learning will be significant.\\n\\n**More convincing pose estimation results (R2):**\\n\\nThe main takeaway from the pose estimation experiments was how impactful it is that ACC can factor in unannotated real images and meaningfully connect them to the synthetic images for which there is set supervision. There is no analogous means of doing so for the comparable VAE-based approach, and the performance gap is dramatic. We have clarified this point in the text, and strengthened the claim by comparing to a second relevant VAE-approach (Bouchacourt 2018) as suggested by R3 (Tab. 1).\\n\\nWe also add to the manuscript pose estimation results in a more realistic setting, of pose regression supervised on synthetic pose annotations and where the ACC loss is optimized on an intermediate embedding space. The unique ability of ACC to incorporate unannotated data from a different domain again leads to a significant boost in accuracy (Tab. 2).\\n\\n**Did the 2D embedding space of the Shapes3D experiments affect performance? (R2, R3):**\\n\\nWe reproduced the Shapes3D experiments in higher dimensional embedding spaces and found the same qualitative behavior, though with some subtle differences (Appx. B).\\n\\n**Reproducibility (R1, R2):**\\n\\nWe add implementation specifics (Appx. 
A) and include in our resubmission code to reproduce the MNIST digit style isolation results. We also plan to release the code for the Shapes3D and pose estimation experiments.\\n\\n**List of changes:**\\n\\n- Rewritten Discussion \\n- Appx. A: Implementation details\\n- Appx. B: Shapes3D experiments in higher dimensional embedding space\\n- Appx. C: More inactive factors in the Shapes3D experiments\\n- Appx. D: Motivating why multiple independent factors of variation are isolated in each of the experiments, with Monte Carlo experiment\\n- Appx. E: Ablative studies for pose estimation problem\\n- Appx. F: Extended comparison between ACC and generative methods for MNIST experiment\\n- Tab. 2 with supervised pose regression improved by ACC and the incorporation of unannotated real images\\n- Fig. 6 with schematic for the pose regression experiment\\n- iPython notebook to reproduce the MNIST digit style isolation results and a GIF showing the evolution of embeddings during training (supplemental)\"}",
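As a concrete anchor for the discussion above, here is a minimal sketch of the soft nearest-neighbour cycle-back classification at the core of ACC, written in the spirit of Dwibedi et al. (2019) with the squared-L2 metric and temperature of 1 mentioned in the responses. It is an illustration of the idea, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def acc_loss(u, v, tau=1.0):
    """Cycle-consistency loss between embeddings u (n, d) of set A and
    v (m, d) of set B: each u_i should cycle back to itself through its
    soft nearest neighbour in v."""
    d_uv = torch.cdist(u, v) ** 2                # squared L2 distances, (n, m)
    alpha = F.softmax(-d_uv / tau, dim=1)        # soft nearest-neighbour weights
    v_soft = alpha @ v                           # soft NN of each u_i, (n, d)
    logits = -torch.cdist(v_soft, u) ** 2 / tau  # cycle back: affinity to each u_k
    labels = torch.arange(u.size(0), device=u.device)
    return F.cross_entropy(logits, labels)
```

Training then amounts to minimizing `acc_loss(phi(batch_A), phi(batch_B))` with a single shared embedding network `phi`, matching the shared-weights answer given in the responses above.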
"{\"title\": \"Thank you reviewers\", \"comment\": \"We thank the reviewers for the time spent carefully reading our manuscript and for the suggestions which will greatly help us improve our paper. We are in the process of revising and will post an updated version of the paper in about a week, with responses to each review shortly thereafter.\"}",
"{\"title\": \"A feature learning method for grouped data. The method seems not supported enough by motivation nor results.\", \"review\": \"The submission proposes a method for representation learning in a setting where data is annotated by set membership (i.e. grouped data). More specifically, the authors aim to extract representations of the factors of variations that are shared across groups.\\nTo do so, the proposed method embeds different sets into a shared latent space using a learning objective called Affinity Cycle Consistency (ACC). ACC imposes a soft version of the cycle consistency on the nearest neighbourg relationship between the two learned sets of embeddings.\\nUsing this method, the authors show in experiments on 3DShapes and MNIST that the inactive features (features that fixed in each set) are removed from the embeddings that, in turn, better encodes for the active features (features that vary inside each set).\\nThey also show how their methods can be used to accurately estimate the pose of objects (cars and chairs) in unlabeled real pictures by aligning them with sets of synthetic images of cars grouped by pose.\\n\\n\\n################################################\", \"strong_points\": \"-Representation learning for grouped data is a relevant topic, for which the submission proposes a simple and effective method. \\n\\n-The authors present a practical use case in section 4.3, showing how the method can help tackle the domain gap problem. Also, experiments presented in Figure 3 uncovers some very intriguing properties that might be worth exploring further.\\n\\n-The paper is well organized, formalism is clear and related work is informative.\", \"weaknesses\": \"-ACC, as noted by the authors, is very similar to the previously introduced Temporal Cycle Consistency. The main contribution of this work is to provide empirical results when applied to a more general context.\\n\\n-The paper lacks crucial discussions about why ACC allows learning a good alignment between sets in the general context. The choice of ACC seems arbitrary, as I find unclear how ACC relates to latent spaces alignment.\\n\\n-While the experimental results are intriguing, they aren't very convincing in terms of the usefulness of the proposed method. Results in section 4.3 are obtained using a 2-dimensional latent space, which is useful for visualization but is a very odd choice for practical uses. Section 4.5 presents an interesting application but the fact that the authors had to use a complex pipeline for these relatively simple images raises concerns about how well the method can generalize to other problems or scale to more difficult images.\\n\\n\\n################################################\", \"rating_motivation\": \"The paper advocates for the use of ACC for representation learning but doesn't provide an explanation about why the method should work. This makes it difficult to assess how much of the results can be attributed to the use of ACC and how much is due to other inductive biases. Also, while the car experiments are interesting, the results by themselves aren't convincing enough.\\n\\nI might change my evaluation if this point is clarified (either via discussion or by additional control experiments).\\n\\n\\n################################################\", \"questions_and_minor_remarks\": \"-Can the authors comment on the importance of the size of the embeddings? Does the capacity bottleneck explain that geometric features are less prominent in figure 3c? 
What happens when using larger embedding?\\n\\n-Can the author confirm that phi:A->L and phi:B->M are implemented as two different convolutional networks? The paper is a little ambiguous.\\n\\n-The method is simple but reproducing would probably need some additional implementation details (architectures, optimization method, hyper-parameters...)\\n\\n-While the active/inactive features formalism is clear, it still might help to illustrate with examples how some classical tasks (for instance in domain transfer) fit in.\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
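The mutual-information measurements debated in this review (Fig. 3, 2-dimensional embeddings) can be reproduced in spirit with a simple histogram estimator. The sketch below is one possible recipe, biased for small samples and not necessarily the estimator the authors used.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def joint_mi(z, y, bins=20):
    """Histogram estimate (in nats) of I(Z; Y) between a low-dimensional
    embedding z of shape (n, d) and a discrete generative factor y, by
    quantile-binning each embedding dimension into `bins` cells and treating
    the joint bin index as a discrete label."""
    joint = np.zeros(len(z), dtype=np.int64)
    for j in range(z.shape[1]):
        edges = np.quantile(z[:, j], np.linspace(0, 1, bins + 1)[1:-1])
        joint = joint * bins + np.digitize(z[:, j], edges)   # codes in 0..bins-1
    return mutual_info_score(y, joint)
```

Comparing `joint_mi` across the six generative factors then gives the kind of per-factor isolation profile shown in Fig. 3, with the caveat that histogram MI degrades quickly as the embedding dimension grows, which is one reason the authors cite for working in two dimensions.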
"{\"title\": \"Leveraging affinity cycle consistency to isolate factors of variation in learned representations\", \"review\": \"The paper presents an approach to isolate factors of variation using weak supervision in the form of group labels. The proposed method Affinity Cycle Consistency (ACC) claims to work with these group labels, which are weaker than the more common, one factor per group type labeling. An important aspect of this approach is that it does not attempt to disentangle the factors of variation, but only capture (or isolate) them in the latent space.\", \"following_are_the_strengths_of_the_paper\": [\"It uses a very simple strategy of combining soft nearest neighbors with cycle consistency in the latent space to achieve the ability of isolating factors of variation. The training of the network while imposing cycle consistency is done by simply minimizing the cross-entropy to predict the nearest neigbhor of an input point in the embedding space.\", \"Some of the empirical results are interesting, as the paper reports increased mutual information (MI) between the embeddings and the factors of variation of interest.\", \"While I appreciate the simplicity of the approach, there are some important concerns which the paper fails to address adequately.\", \"The analysis of empirical results is missing, which raises many questions.\", \"Experiments of Fig. 3 indicate that with one inactive factor, two of the remining five active factors are isolated, and with two inactive factors, one of the remaining four active ones is isolated. This behavior is not clear as to why it happens? Why are the other factors not isolated? What if we have other factors (the not so easy ones like scale, shape or pose) inactive, will still the background objects' factors will be isolated? This behavior should ideally be explained through further analysis experiments. Similarly for other datasets.\", \"It is not clear how the temperature value (as used in Definition 2) is set. How sensitive is the performance to this parameter.\", \"How is the dimensionality of the embedding space determined? How does it effect the isolation / disentangling performance? (Jha et al. 2018) seem to show that the latent space dimensionality impact disentangling performance. Could this be a reason for the other factors not being isolated in Fig. 3? The embedding dimensionality is only 2.\", \"Comparisons with group level supervision work\", \"(Bouchacourt et al., 2018) used group level supervision for disentanglement, and would have been more appropriate for comparisons than e.g., (Jha et al. 2018). It is not clear why no comparisons were made with this work.\", \"There are other mistakes, possibly typographical in nature.\", \"From Definition 2, it appears that the soft nearest neigbhor of l_i has larger \\\\alpha_j for points m_j that are farther away from l_i.\", \"Para before Definition 1, defines B, such that |B|=m, but it seems that it is meant to be n.\", \"My recommendation is to reject this paper, primarily because of the lack of analysis and ablative experiments, as well as comparisons to the related approach of (Bouchacourt et al. 2018). It is unclear why certain factors get isolated while others do not, based on the active and inactive sets of factors. 
Well-designed analysis experiments will perhaps bring about some additional insights.\", \"I believe the paper will make a far more compelling case if there are analysis experiments presenting the strengths of the approach that provides insights into why certain factors are easily isolated and others are not.\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
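The review above raises two points about the soft nearest neighbor of Definition 2: how the temperature is set, and whether the weights alpha_j really grow with distance. For reference, here is a minimal NumPy sketch of the conventional soft-nearest-neighbor computation (as in Temporal Cycle-Consistency learning), in which closer points m_j receive larger weights; all names are ours, and nothing below is taken from the paper under review.

```python
import numpy as np

def soft_nearest_neighbor(l_i, M, temperature=1.0):
    """Soft nearest neighbor of embedding l_i among the rows of M.

    Logits are -||l_i - m_j||^2 / temperature, so closer points get
    larger softmax weights alpha_j (the sign the reviewer expects).
    """
    sq_dists = np.sum((M - l_i) ** 2, axis=1)       # (n,) squared distances
    logits = -sq_dists / temperature
    logits -= logits.max()                          # numerical stability
    alpha = np.exp(logits) / np.exp(logits).sum()   # softmax weights
    return alpha @ M                                # convex combination of rows

# Toy usage: query a slightly perturbed copy of one of five 2-D embeddings.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 2))
print(soft_nearest_neighbor(M[0] + 0.1, M, temperature=0.5))
```

A smaller temperature sharpens the weights toward the hard nearest neighbor, which is one way to think about the sensitivity question raised above.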
"{\"title\": \"This paper applies a weakly-supervised Affinity Cycle Consistency loss to recognize factors of variation in image data sets and learn the image embedding representation of each identified factor.\", \"review\": \"This paper applies a weakly-supervised learning approach to identify factors of object postures in an image dataset. The core idea is to introduce two sets of images. The first set is the reference data set with grouped objects of different active/inactive posture constraints. This set is used to provide weak supervision information in posture identification. The second set is the probe set. It does not necessarily require posture grouping of objects. Affinity Cycle Consistency loss is set up to automatically map objects of similar active postures between the two image sets (objects of similar postures are supposed to be the nearest neighbors in the learned embedding space). The experimental study verifies the validity of the proposed factor isolation algorithm.\\n\\nGenerally this paper is well written and clearly explains the motivation/problem definition. However, we have the following concerns on the innovative contribution of this work.\\n\\nThe innovation of this paper is very limited. The core technology applied in this work was originally proposed by Dwibedi et al in the paper \\\"Temporal Cycle-Consistency Learning\\\". As cited in Section.1 of this paper, this work employs directly the Cycle-Consistency learning mechanism (while in a different application scenario). The only difference is: this work is built based on affinity relation between pairs of images with similar postures, while the original idea was applied for temporal sequence alignment in video processing. Not significant algorithmic innovation is introduced, compared to the previous work. It is clearly below the threshold for a high-quality venue like ICLR.\", \"rating\": \"3: Clear rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review\", \"review\": \"This paper uses affinity cycle consistency to isolate factors of variation with only weak supervision on set membership. It has extensive experiments with both synthetic and real data.\\n \\nThe strength is that the problem setting is reasonable and important. The algorithm is sounding, and the evaluation is valid. The weakness is that the experiments and discussions do not cover enough cases for extracting and suppressing factors.\\n \\nI recommend that this paper is marginally above the border.\\n \\nThe positive reasons are the following. Isolating factors is a difficult and important problem, so the progress is good. Using weak supervision is practically convenient. Also, the paper shows that it helps a synthetic-to-real transfer problem. The following are some concerns.\\n1. Extracting all active factors may need more samples in set A.\\nIn the experiment of Figure 3c, second from the top, the learned representation does not isolate active geometric factors, indicating it does not include the information. This means set A does not contain enough variety of elements so that the object and floor hues are enough to distinguish the elements. To isolate all the active factors, set A should contain more elements than used in the experiment. It would be more helpful to find how many elements are necessary from theoretical and / or empirical perspectives.\\n2. Suppression of complicated factors.\\nThe paper does not tell why the inactive factors can be suppressed for unseen values, especially when it is inactive in both set A and B. When inactive, a factor takes a fixed value. So when it is inactive both in A and B, the factor has at most two values in training. However, in the test, it is suppressed for all (or most) possible values. The generalization capability from two values to many values requires more explanations, especially when the factor is complicated (e.g. non-linear) to extract. From experiment perspective, Figure 3c covers the cases of inactive \\u2018easy\\u2019 \\u2019hue factors. It would be helpful to see what happens when the three hue factors and a geometric factor (e.g. orientation) are all inactive.\", \"additional_feedback\": \"The last section is named \\u201cDiscussion\\u201d, but it looks like \\u201cConclusion\\u201d from its contents and position.\\nThe paper does not provide code to reproduce the results.\\nSome figures and letters on them are very small.\", \"grammars\": \"\\u201ca affinity cycle consistency loss\\u201d\\n\\u201cfor a each generative factor\\u201d\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
fylclEqgvgd | Transformer protein language models are unsupervised structure learners | [
"Roshan Rao",
"Joshua Meier",
"Tom Sercu",
"Sergey Ovchinnikov",
"Alexander Rives"
] | Unsupervised contact prediction is central to uncovering physical, structural, and functional constraints for protein structure determination and design. For decades, the predominant approach has been to infer evolutionary constraints from a set of related sequences. In the past year, protein language models have emerged as a potential alternative, but performance has fallen short of state-of-the-art approaches in bioinformatics. In this paper we demonstrate that Transformer attention maps learn contacts from the unsupervised language modeling objective. We find the highest capacity models that have been trained to date already outperform a state-of-the-art unsupervised contact prediction pipeline, suggesting these pipelines can be replaced with a single forward pass of an end-to-end model. | [
"proteins",
"language modeling",
"structure prediction",
"unsupervised learning",
"explainable"
] | Accept (Poster) | https://openreview.net/pdf?id=fylclEqgvgd | https://openreview.net/forum?id=fylclEqgvgd | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"9yxu3284Atv",
"jkW2ikDBLQ_",
"z_nyYFw_nG",
"-_tHCjK8_O-",
"II-nlX4-ZR7",
"5eYq0uVDx2S",
"ofLAPKInKfW",
"L8vzLw3gWz_",
"akhnP6uu9HV",
"x0Fqg1hrZXh",
"ivFah-V-rHc",
"rFdUCNIlG_R",
"34HFsILFxM",
"goICfmHPbVf",
"AsQVaEgEL_",
"a2VIZand6bJ"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040433421,
1606227237372,
1605466403988,
1605151285543,
1605151267031,
1605151077760,
1605150803098,
1605150775024,
1605150295026,
1605150070831,
1605149938284,
1605149910231,
1604283148514,
1603965711938,
1603839187818,
1602947717907
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3581/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3581/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3581/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3581/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3581/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The authors have done a very thorough job of responding to the comments from reviewers. The paper has a clear contribution, namely that attention maps predict contacts as well as existing unsupervised pipelines. This paper deserves to be published.\\n\\nIn the final version, the authors should discuss briefly \\\"BERTology Meets Biology: Interpreting Attention in Protein Language Models\\\"(https://openreview.net/forum?id=YWtLZvLmud7) and \\\"Improving Generalizability of Protein Sequence Models via Data Augmentations\\\" (https://openreview.net/forum?id=Kkw3shxszSd). However, the authors should also make sure that the final version respects the ICLR length limits.\\n\\nI am recommending poster acceptance because the results are anticlimactic given the recent success of Deepmind at CASP 2020.\"}",
"{\"title\": \"Substantial improvements to the manuscript\", \"comment\": \"The authors have addressed all the comments appropriately and have made substantial modifications to the paper considering my comments.\\n\\nIn my opinion, the related work section and the clear explanation on the supervised vs unsupervised contact prediction literature greatly improved the manuscript .\\n\\nI am still slightly concerned about reproducibility, as the modifications to the ESM architecture are not very clear to me. However, The authors have promised to share the weights, .\\n\\nIn the new version of the paper it is still not clear how many predictions have a Manhattan distance larger than 4. It would be good if the authors could provide a figure, or a table, detailing the distribution of predicted contacts and their respective Manhattan distances to the closest true contact. I appreciate that they provided the proportion of proteins with at least one predicted contact > 4 at different thresholds for contact probability, but in my opinion this does not give a clear enough picture.\\n\\nAll things considered, I believe the paper has improved substantially, and I am willing to increase my score.\"}",
"{\"title\": \"Updated Revisions\", \"comment\": \"In order to address concerns regarding related work and evaluation, we have updated our submission with the following changes:\\n\\n1. New Related Work section, describing unsupervised and supervised contact prediction along with relevant citations.\\n2. CASP 13 Evaluations (Section A.6, Table 4, and Figure 5).\\n3. Comparisons with mfDCA and PSICOV baselines.\\n4. Comparisons with Rives et al. 2020 supervised bilinear model on CASP13.\\n5. Computed mean squared error for calibration analysis, and show comparison on ESM models.\\n6. Removed speculative wording around sources of alignment failures.\\n7. Bootstrapping analysis (Section A.10)\\n8. Secondary structure analysis moved to appendix\\n9. Additional discovered mode of false-positive contacts shown\"}",
"{\"title\": \"Response to Reviewer 1 (Continued)\", \"comment\": \"\\\\>\\\\>Major comments\\n\\n\\\\>\\\\>1. Using Transformer attention maps for protein contact prediction is not new...\\n\\nWe address novelty of this work above and in the comment to all reviewers. Thank you for pointing out the need for a more extensive related work section, we agree and will add this section discussing both Rives, et al. 2020 and Vig, et al. 2020 as well as references suggested by the other reviewers. \\n\\n\\\\>\\\\>2. ...existing methods for contact prediction (beyond Gremlin), however, are not described sufficiently.\\n\\nWe agree a more detailed background is necessary and will add this to the manuscript.\\n\\n\\\\>\\\\>3. It is unclear which sequences were used for training the Transformer models and how similar they are to test sequences.\\n\\nESM transformers were trained using UniRef 50, which is noted in the introduction and in Figure 1. Note that the models and baselines are given access to the same set of sequences during test time. Since the number of similar sequences in the training set can be judged by the MSA depth of the sequence, we see in Fig 6 that as expected both the baseline and ESM perform better when there are more similar sequences available in the training set. However we show that ESM performs better than the baseline when fewer similar sequences are available in the ESM training set.\\n\\nWe will also add new experiments on CASP13 proteins to the revision. Since the model was trained on data available prior to CASP13, these sequences will not be in the training set.\\n\\n\\\\>\\\\>4. ...it is unclear how well they perform to the CASP state-of-the art (see also Rives et al. 2020).\\n\\nWe will add a comparison of unsupervised methods on CASP13 comparing the transformer attention maps, pseudolikelihood methods (Gremlin), mean field methods (Evcouplings), and sparse inverse covariance estimation (Psicov). We note that Gremlin is considered a state-of-the-art method for this problem.\\n\\n\\\\>\\\\>5. Section 3.4 does not describe clearly enough how attention maps were used for predicting contact maps...\\n\\nWe provide a much more detailed explanation of the logistic regression in Appendix Section A.6. Attention maps are 2D matrices, so we symmetrize via 0.5 * (A + A^T). All layers and heads were used as input features, for a total of 660 features in the ESM-1b model (33 layers * 20 heads). We describe and cite APC in Appendix section A.2. Thank you for pointing out that APC is not cited in the main text -- we will add this citation.\\n\\n\\\\>\\\\>6. Section 4.5 discusses that Transformers can be also used for secondary structure prediction. This is not new...\\n\\nWe agree that this is a peripheral result and for that reason the figures relating to secondary structure are in the appendix already. We thought it was interesting to describe as this is a different and more interpretable way to extract secondary structure from Transformer models than used in previous work e.g. Rives et al. 2019 and Vig et al. 2020 both of which did not use the attention maps. We note that local contacts (within a sequence separation of 6) can correspond to secondary structure, and so we use secondary structure as a proxy for analyzing the accuracy of contacts within this sequence separation range.\\n\\n\\\\>\\\\>7. Section 4.8: Using transformers for generating proteins with natural properties is not new (see Madani et al. 2020, \\u2018ProGen\\u2019 or Rives et al. 2020). 
\\u2018Wang & Cho\\u2019 were not the first to use Transformers generatively (see Vaswani, 2017).\n\nVaswani et al. use an autoregressive decoder transformer as opposed to an encoder. We cite Wang & Cho as the first to generate from a non-autoregressive encoder transformer trained with a masked language modeling objective. Rives et al. 2019 and Rives et al. 2020 do not show results on generating proteins. Madani et al. 2020 show that it is possible to generate proteins that might preserve natural properties. However, there are key differences. First, they use autoregressive decoder transformers, rather than bidirectional encoders. Our analysis demonstrates that bidirectional encoder transformers can also be used to generate proteins with natural properties. Additionally, our approach can generate proteins in the neighborhood of an existing protein, which may be highly useful for tasks such as protein engineering. Finally, our analysis shows that generated sequences directly preserve the statistics needed to infer protein contacts, which is not shown by Madani et al. 2020.\"}",
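To make the feature-extraction detail in the response above concrete, here is a minimal NumPy sketch of symmetrizing one attention map via 0.5 * (A + A^T) and applying the standard average product correction (APC) from coevolution analysis. The function name and toy dimensions are our own illustration, not the authors' released code.

```python
import numpy as np

def attention_to_contact_features(attn):
    """Symmetrize one (L x L) attention map and apply the average product
    correction (APC): F_ij - (mean_i * mean_j) / mean_all, the standard
    background correction from coevolution analysis."""
    sym = 0.5 * (attn + attn.T)                # symmetrize, as described above
    row = sym.mean(axis=1, keepdims=True)      # per-row means, shape (L, 1)
    col = sym.mean(axis=0, keepdims=True)      # per-column means, shape (1, L)
    return sym - (row @ col) / sym.mean()      # APC-corrected feature map

# With 33 layers x 20 heads, a model like ESM-1b yields 660 such maps per
# protein, which a sparse logistic regression combines into contact scores.
L = 8
attn = np.random.default_rng(1).random((L, L))
features = attention_to_contact_features(attn)
print(features.shape)
```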
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"\\\\>\\\\>The paper shows that Transformers trained unsupervised on millions of protein sequences learn information about protein contacts by using attention maps for contact prediction...\\n\\nWe thank the reviewer for their time, interest in the paper, and constructive feedback.\\n\\n\\\\>\\\\>However, two recent papers that appeared on arXiv before the ICLR submission deadlines also use Transformers for protein contact prediction. See Rives et al, 2020, \\u2018Biological structures and functions emerge\\u2026\\u2019, section 5.2, and Vig et al, 2020, \\u2018Bertology\\u2019 section 4.2. \\n\\nThis is the first paper to show state-of-the-art results for unsupervised contact prediction from a transformer protein language model. Prior work Rives et al. 2020 benchmarks supervised contact prediction with deep residual networks. Vig et al. 2020 show that one specific head of the TAPE transformer is correlated with contacts (see Vig et al. 2020 Fig 4), but make no comparison to state-of-the-art methods for unsupervised contact prediction. In contrast, our work provides a new method that results in state-of-the-art performance on the unsupervised contact prediction problem. \\n\\nMoreover this is the first paper to demonstrate a state-of-the-art result for contact prediction from protein language modeling -- this is an important result for protein language modeling as previous work e.g. Rao et al. 2019, and Rives et al. 2020 have shown that protein language models fall well below state-of-the-art performance on supervised contact prediction tasks.\\n\\n\\\\>\\\\>These papers and other methods for contact prediction beyond Gremlin are not described...\\n\\nWe will add a more thorough description of related work addressing unsupervised and supervised contact prediction.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"We thank the reviewer for their time, interest, and helpful critique.\\n\\n\\\\>\\\\>Contributions of this paper are [...] showing that the attention maps built in Transformer-based protein languages learn protein contacts, and when extracted, they perform competitively for protein contact prediction ...\\n\\nWe agree on the contributions the reviewer has identified.\\n\\n\\\\>\\\\>However, I have a number of concerns.\\n\\nBelow and in the comment to all reviewers we outline a plan to address these concerns.\\n\\n\\\\>\\\\>However, this was reported before in Rives et al (2019)\\n\\nRives et al. 2019 does not report an analysis of attention maps. Rather, it uses the output from the final layer for supervised contact prediction.\\n\\n\\\\>\\\\>Also, several methods have been developed for this problem, but are not included in the comparisons.\\n\\nWe believe that pseudolikelihood maximization (and by extension Gremlin, which implements this method) is the current state-of-the-art for unsupervised contact prediction. To address the concern we will add a comparison to the Evcouplings implementation of mean-field inference, and to the Psicov implementation of sparse inverse covariance matrix estimation.\\n\\n\\\\>\\\\>I would recommend comparing transformers to other methods besides Gremlin...\\n\\nThank you for the suggestion. Section 2.2. of Adhikari 2016 describes evolutionary coupling-based methods. We note that Gremlin is an implementation of the pseudolikelihood based methods discussed in this section. The mean field approximation is also discussed here, as well as sparse inverse covariance matrix estimation. We propose adding comparisons to the mean field approximation (Evcouplings implementation) and the sparse inverse covariance matrix (Psicov implementation).\\n\\n\\\\>\\\\>Also, more recent methods that were published after the review are...\\n\\nThese citations are for supervised contact prediction methods which are all deep neural networks trained with supervision from many protein structures. A comparison to our unsupervised contact prediction method is not appropriate as the problem settings are fundamentally distinct. We will add a discussion of supervised methods to the related work section.\\n\\n\\\\>\\\\>However, from the plot in Figure 10a, it is not totally clear that the probabilities are well calibrated...\\n\\nWe agree that asserting the model is \\u201cwell calibrated\\u201d is unclear without a baseline. Since it is not obvious what the correct baseline should be, we will reword this \\u201cwe see that the model\\u2019s predicted probability is correlated with the actual contact probability.\\u201d We will add Pearson correlation between predicted and actual contact probability for the ESM1b model as well as the other transformer models as suggested.\\n\\n\\\\>\\\\>Could the authors report how many predictions have a Manhattan distance larger than 4...\\n\\nWe appreciate the suggestions and will update the manuscript with the number of predictions with Manhattan distance greater than 4. After the submission deadline we have analyzed an alternate failure mode where the hallucinated contact is not representative of a true contact. We will add an example of this failure mode as well. We note that in both the old and new failure mode, hallucinated contacts appear in the Gremlin contacts as well. 
An analysis of which failure modes are most common is a very interesting idea, but would require too much manual work to be completed in the rebuttal period.\\n\\n\\\\>\\\\>To make it reproducible...\\n\\nWe agree on the importance of reproducibility. We will make available contact prediction weights for ESM-1 and ESM-1b models allowing loading the models released by the ESM authors, along with a `predict_contacts` API. If you would like to review the code yourself, we will make the effort to anonymize it as much as possible.\"}",
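The calibration question in the exchange above (correlation between predicted and empirical contact probability, possibly summarized by MSE) can be checked with a generic reliability curve. The sketch below, with names of our choosing, is one plausible version of such an analysis, not the authors' exact procedure.

```python
import numpy as np

def calibration_curve(pred_prob, is_contact, n_bins=10):
    """Empirical contact frequency per predicted-probability bin, plus the
    Pearson correlation between bin centers and frequencies."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    centers, freqs = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pred_prob >= lo) & (pred_prob < hi)
        if mask.any():
            centers.append((lo + hi) / 2)
            freqs.append(is_contact[mask].mean())
    r = np.corrcoef(centers, freqs)[0, 1]
    return np.array(centers), np.array(freqs), r

# Toy data that is perfectly calibrated by construction; r should be near 1.
rng = np.random.default_rng(4)
p = rng.random(10000)
y = rng.random(10000) < p
print(calibration_curve(p, y)[2])
```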
"{\"title\": \"Response to Reviewer 2 (Continued)\", \"comment\": \"Major comments\\n\\n\\\\>\\\\>The main metric\\nPrecision at L is the standard contact prediction metric across the literature. Because having a small number of highly accurate contacts is useful (Skolnik et al. 1997, Kim et al. 2014), the field has standardized around this metric.\\n\\n\\\\>\\\\>When comparing ESM to the baseline Gremlin method\\n\\nGremlin generally performs well when sequences are filtered using an identity cutoff of 80-90% similarity. As per Rives et al. 2020, ESM used a sequence identity clustering at 50% to train their model, which might not be optimal for Gremlin. The trRosetta data is our attempt to optimize Gremlin performance as much as possible. We believe this is a very strong baseline since these MSAs were used to achieve sota results for supervised contact prediction in Yang et al. 2020. Note it uses an optimal sequence identity cutoff for Gremlin, along with a series of e-value similarity thresholds. Finally, it augments smaller MSAs with additional metagenomic data. \\n\\n\\\\>\\\\>The paper compares several transformer models\\n\\nThank you for the suggestion. We will add a table describing the differences in more detail and discussion to the paper describing how factors that vary between the models influence the results. We describe some ESM-1b changes in section A.5. The ESM-1b authors have made the model publicly available at https://github.com/facebookresearch/esm\\n\\n\\\\>\\\\>the sequences from the testing set of the contact prediction problem (or sequences highly similar to them) could appear in the training sets of the considered transformer models\\n\\nPlease note that there isn\\u2019t an information leakage problem here in the sense that the baseline has access to the very same or strictly more sequences than our model was trained on. From Figure 7 we see that there is clearly a correlation between MSA depth and performance for both ESM and the baseline. MSA depth should provide a good proxy not just for sequence similarity (which is merely distance to the nearest sequence) but for the density of similar sequences present in the training set. We find that ESM outperforms the baseline significantly when the MSA depth is low (few similar sequences in the training dataset), and believe this is one of the strengths of our approach.\\n\\nTo address this overlap more clearly, we will add results on CASP13 to the revision. Since training data for ESM-1b was generated prior to CASP13, these sequences will not have appeared in the training set. \\n\\n\\\\>\\\\>I wonder how robust the results of these analyses are\\n\\nIn order to improve analysis of robustness, we will show bootstrapped results for training 100 different logistic regressions using randomly sampled training proteins.\\n\\nIt is computationally expensive to train transformer protein language models, and so few are available for evaluation. We do show contact prediction results on multiple distinct transformer models, including models trained with different architectures by different groups on different data (ESM-1 6, 12, 34 layers, ESM-1b, ProtBERT-BFD, and TAPE).\\n\\n\\\\>\\\\>I would be very eager to see the comparison of 3D structure accuracy inferred with ESM-predicted and Gremlin-predicted contacts.\\n\\nWe would also be very interested in comparing 3D structure inferred with ESM versus Gremlin contacts. This is likely beyond the scope of a revision and would be interesting for future work.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"We thank the reviewer for their time and interest in the paper and for helpful comments.\\n\\n\\\\>\\\\>Several experiments are performed to showcase that (i) transformer-based representations can outperform state-of-the art methods based on MSA in terms of contact prediction precision; (ii) that the necessary information for contact predictions in these representations is learned in an unsupervised manner (and not by the logistic regression put on top of these representations); and (iii) that the contact prediction probabilities are reasonably well calibrated.\\n\\n\\\\>\\\\>In its current form the paper presents interesting analyses, but has overall limited novelty. The ability of transformer models to learn representations predictive of secondary and tertiary structure has been demonstrated before \\n\\nThe reviewer\\u2019s main concern appears to be novelty. However we note that the reviewer agrees in point (i) above that the paper shows \\u201ctransformer-based representations can outperform state-of-the art methods based on MSA in terms of contact prediction precision.\\u201d No prior work has shown state-of-the-art performance for unsupervised contact prediction from a protein language model.\\n\\nUnsupervised contact prediction is an important and well studied problem (discussed in more depth in the comment to all reviewers) that has seen little progress since the introduction of pseudolikelihood maximization -- the state-of-the-art baseline we use in this paper. Additionally in point (ii) the reviewer agrees \\u201cthat the necessary information for contact predictions in these representations is learned in an unsupervised manner.\\u201d This is also an important contribution of the work -- this paper is the first to show that state-of-the-art contacts are learned by Transformer language models in an unsupervised and interpretable manner.\\n\\nWe realize we have not well situated the paper w.r.t. the supervised contact prediction literature, and prior work with protein language models in the supervised setting. We will endeavor to address this in the revision incorporating feedback from reviewers and additional references.\"}",
"{\"title\": \"Response to Reviewer 4 (Continued)\", \"comment\": \"Major comments\\n\\\\>>1. Missing related work...\\n\\nThank you for the suggestions and we agree that there is significant prior work in protein contact prediction that should be cited. We will add a Related Work incorporating these suggestions and additional references. While Risselman et al. 2018 and Bepler & Berger 2019 show that deep unsupervised models may learn structural information, our work demonstrates that this information is interpretable and accessible with little or no supervision required.\\n\\n\\\\>>Before this work, others have looked at fine tuning language models for contact prediction...\\n\\nAs far as we are aware, previous approaches using protein language models for contact prediction (including Bepler & Berger 2018, Rives et al. 2019, and Rao et al. 2019) consider the supervised contact prediction problem. Here, we focus on the model\\u2019s ability to learn contacts without supervision. In particular, our top-1, 5, and 10 head results show that the model does not require any supervision at all to predict contacts.\\n\\n\\\\>>Many methods have surpassed GREMLIN for contact prediction using evolutionary couplings\\u2026. \\n\\nMany supervised contact prediction methods have surpassed Gremlin for contact prediction, including those using evolutionary couplings, however we believe pseudolikelihood maximization (which Gremlin implements) is still considered state-of-the-art for unsupervised contact prediction. We will add results on the CASP data to make this point more clear.\\n\\n\\\\>>Minor Comments:\\n\\n\\\\>>Although multiple sequence alignment methods have challenges...\\n\\nDickson & Gloor (2012) find that errors in the alignment can cause errors in downstream coevolution analyses. Malinverni & Barducci (2019) find that alignments that mix sub-families in an MSA cause errors in coevolution-based contact prediction. We will add references to these works and remove speculative comments.\\n\\n\\\\>>1. The authors use the language model without fine tuning...on MSAs...\\n\\nWe do perform this experiment (we call it \\u201cevolutionary finetuning\\u201d as proposed by Alley et al. 2019). We discuss it briefly in Section 4.2 and in further detail in section A.11. We find that fine-tuning on individual MSAs leads to a minimal increase in performance, likely due to rapid overfitting of the large model. We note that this analysis is limited -- it is possible that fine tuning only certain layers or otherwise limiting the model\\u2019s ability to overfit may improve performance. We leave this to future work. We also show that averaging over sequences in the MSA can provide similar benefits without the costs of fine-tuning.\\n\\n\\\\>>2. Eight iterations of jackhmmer is a lot...\\n\\nThank you for this insightful comment. We propose to re-do this experiment following the procedure of Zhang et al. 2019, performing jackhmmer iterations until an Neff of 128 is reached.\\n\\n\\\\>>3. How are sequence depths in Figure 3 calculated?\\n\\nWe use the raw number of sequences in Figure 3.\", \"things_that_would_improve_my_rating\": \"\\\\>>1. Provide a more comprehensive background review.\\n\\nThank you for the references -- we agree and will include this in the revision.\\n\\n\\\\>>2. 
Compare with state-of-the-art evolutionary coupling-based contact prediction methods.\\n\\nWe believe that pseudolikelihood maximization as implemented by Gremlin is the current state-of-the-art unsupervised contact prediction method. We note that two new methods for fitting an MRF to an alignment have been proposed (Vorberg et al. 2018, Figliuzzi et al. 2018), but have been shown to have nearly identical performance to pseudolikelihood maximization.\\n\\nWe specifically do not claim to achieve a state-of-the-art supervised contact prediction method. Instead, we claim that as with pseudolikelihood maximization, protein contacts naturally emerge from the unsupervised training signal in an interpretable and highly accessible manner. Therefore we do not believe that comparison to supervised methods, which incorporate significantly more information, is warranted.\\n\\n\\\\>>3. Compare with other language model-based contact prediction methods.\\n\\nSame as response to 2. Since prior language model-based contact prediction methods are trained with thousands of structures, it would be inappropriate to compare this setting (large models with millions of parameters trained with thousands of protein structures) with the unsupervised setting (logistic regression fit with zero to twenty proteins); these are fundamentally different problem settings.\\n\\n\\\\>>4. What should interest the general machine learning community about this paper?\\n\\nPlease see the note above and in the response to all reviewers. We view this paper as arguing for a fundamentally different interpretation of learned representations in transformers, one that is highly interpretable and directly maps to physical structures.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"We thank the reviewer for their time and attention to the paper and for detailed comments.\\n\\n\\\\>>The general concept of fine tuning protein language models for contact prediction has circulated for some time which lessens the core contribution,\\n\\\\>>The existence of previous language model-based contact prediction methods reduces the novelty of this work, especially given that the model used here is from Rives et al. 2019, who already look at contact prediction.\\n\\nWhile protein language models have been studied for contact prediction, e.g. Rives et al. 2019, Rao et al. 2019, this has been in the supervised setting. No existing work applies the models to the unsupervised contact prediction problem. This is the first work to demonstrate that unsupervised learning from a protein language model exceeds performance of state-of-the-art evolutionary couplings based unsupervised contact prediction.\\n\\n\\\\>>Overall this is an interesting work, though there is quite a bit of background on contact prediction missing.\\n\\nThank you for pointing out additional references. We will add a related work section covering contact prediction and other topics.\\n\\n\\\\>>This paper is also very application specific and may not present new machine learning methods of general interest to the ICLR community.\\n\\\\>>With this in mind, the manuscript may be better suited to submission at a biology specific venue.\\n\\nWe respectfully disagree. In this paper we propose an interpretable machine learning model that achieves state-of-the-art performance on an important unsupervised learning task in structural biology. This provides strong evidence that attention-based representations produced by unsupervised language modeling objectives can directly represent physical structures, which is of interest to the ICLR community.\\n\\n\\\\>>Furthermore, no comparisons with state-of-the-art evolutionary coupling-based or language model-based contact prediction methods are performed.\\n\\nPseudolikelihood maximization is the current state-of-the-art for unsupervised contact prediction (we use the Gremlin implementation). We will also add mean-field DCA (as implemented by Evcouplings) and sparse inverse covariance (Psicov implementation) as comparisons. There are no unsupervised language model-based contact prediction methods for comparison.\"}",
"{\"title\": \"References\", \"comment\": \"[1] Rives et al. (2020). Biological Structure and Function Emerge from Scaling Unsupervised Learning to 250 Million Protein Sequences.\\n\\n[2] Rao et al. (2019). Evaluating Protein Transfer Learning with TAPE.\\n\\n[3] Vig et al. (2020). BERTology Meets Biology: Interpreting Attention in Protein Language Models.\\n\\n[4] Bepler & Berger (2019). Learning protein sequence embeddings using information from structure.\\n\\n[5] Hopf et al. (2018). The EVcouplings Python framework for coevolutionary sequence analysis.\\n\\n[6] Lapedes et al. (1999). Correlated Mutations in Models of Protein Sequences: Phylogenetic and Structural Effects.\\n\\n[7] Thomas et al. (2008). Graphical Models of Residue Coupling in Protein Families. \\n\\n[8] Weigt et al. (2009). Identification of direct residue contacts in protein-protein interaction message passing.\\n\\n[9] Jones et al. (2012). PSICOV: precise structural contact prediction using sparse inverse covariance estimation on large multiple sequence alignments.\\n\\n[10] Alley et al. (2019). Unified rational protein engineering with sequence-based deep representational learning.\\n\\n[11] Zhang et al. (2019). DeepMSA: constructing deep multiple sequence alignment to improve contact prediction and fold-recognition for distant-homology proteins.\\n\\n[12] Skolnik et al. (1997). MONSSTER: a method for folding globular proteins with a small number of distance restraints.\\n\\n[13] Kim et al. (2014). One contact for every twelve residues allows robust and accurate topology-level protein structure modeling.\\n\\n[14] Dickson & Gloor (2012). Protein Sequence Alignment Analysis by Local Covariation: Coevolution Statistics Detect Benchmark Alignment Errors.\\n\\n[15] Malinverni & Barducci (2019). Coevolutionary Analysis of Protein Subfamilies by Sequence Reweighting.\\n\\n[16] Yang et al. (2020). Improved protein structure prediction using predicted interresidue orientations.\\n\\n[17] Vaswani et al. (2017). Attention is all you need.\\n\\n[18] Wang & Cho (2018). BERT has a Mouth, and It Must Speak: BERT as a Markov Random Field Language Model.\\n\\n[19] Madani et al. (2020). ProGen: Language Modeling for Protein Generation.\\n\\n[20] Balakrishnan et al. (2011). Learning generative models for protein fold families.\\n\\n[21] Seemayer et al. (2014). CCMpred--fast and precise prediction of protein residue-residue contacts from correlated mutations.\\n\\n[22] Vorberg et al. (2018). Synthetic protein alignments by CCMgen quantify noise in residue-residue contact prediction.\\n\\n[23] Figliuzzi et al. (2018). How pairwise coevolutionary models capture the\\ncollective residue variability in proteins.\\n\\n[24] Morcos et al. (2011). Direct-coupling analysis of residue coevolution captures native contacts across many protein families.\\n\\n[25] Ekeberg et al. (2013). Improved contact prediction in proteins: Using pseudolikelihoods to infer Potts models.\"}",
"{\"title\": \"Response to All Reviewers\", \"comment\": \"We thank all reviewers for their thoughtful comments and suggestions. We are pleased to see that every reviewer considers this an interesting work.\\n\\nIn particular, we are glad that all reviewers appreciate that this work demonstrates transformers learn contacts in an unsupervised manner outperforming state-of-the-art unsupervised pipelines (R4: \\u201csurprisingly data efficient and accurate\\u201d, R2: \\u201ccontact predictions in these representations is learned in an unsupervised manner\\u201d; R3: \\u201cperform competitively for protein contact prediction [...] does not require sequence alignments\\u201d; R1: \\u201clearn information about protein contacts by using attention maps\\u201d).\\n\\nThe primary concern highlighted by all reviewers appears to be the novelty of this work. The reviewers point out that significant prior work exists around contact prediction from protein language models, e.g. Bepler and Berger 2019, Rives et al. 2019, Rao et al. 2019, Rives et al. 2020. We agree with the reviewers that protein language modeling has been applied to contact prediction in the past; however prior work focuses on the **supervised contact prediction** problem.\\n\\nThe main novelty of this work is that it focuses on the **unsupervised contact prediction** problem showing state-of-the-art performance. Our work is the first to propose an interpretable unsupervised contact prediction method from protein language models. The use of attention maps in our method distinguishes it from the approaches used in prior work in the supervised setting. Our approach is also completely different to all evolutionary-coupling methods for unsupervised contact prediction, is competitive with pseudolikelihood maximization at all MSA depths, and especially improves on pseudolikelihood maximization for shallow MSAs (see Fig 3).\\n\\nUnsupervised contact prediction is well recognized as an important problem in its own right, evidenced by the breadth of prior work. Direct coupling analysis was initially described in Lapedes et al. 1999 and reintroduced by Thomas et al. 2008 and Weigt et al. 2009. Various methods have been developed to fit the underlying Markov Random Field, including inverse covariance (Morcos et al. 2011), sparse inverse covariance (Jones et al. 2012) and pseudolikelihood maximization (Balakrishnan et al. 2011, Seemayer et al. 2014, Ekeberg et al. 2013). Pseudolikelihood maximization is generally considered state-of-the-art for unsupervised contact prediction and is used as the baseline throughout. In order to provide a more thorough comparison to prior methods, we will also add mean-field DCA and sparse inverse covariance as additional baselines.\\n\\nNo prior protein language modeling work directly considers the unsupervised contact prediction problem and benchmarks against the current state-of-the-art. Rives et al. 2019 fits linear projections and deep residual networks to the final hidden representation of the language model, demonstrating that information about contacts is encoded in the model and can be identified by supervision. Both Rao et al. 2019 and Rives et al. 2020 consider the supervised contact prediction application using deep residual networks and benchmark against supervised methods. Vig et al. 2020 Fig 4 shows that a particular head of the TAPE transformer correlates with contacts. They do not extract contact predictions using the attention maps, nor do they report contact precision based on attention maps. 
In Fig 16, they do report contact precisions, but these are fit from the hidden representations using supervision from many structures.\\n\\nThe reviewers have pointed out that a better discussion of prior work is needed. We acknowledge we have not properly positioned our work with respect to the literature on supervised contact prediction. We will add a thorough discussion incorporating the suggested references, discussing supervised and unsupervised approaches, and clearly delineating the unsupervised problem.\\n\\nWe believe this work is relevant to the ICLR community. Unsupervised learning is a core topic within the conference. The combination of unsupervised representation learning at scale with interpretability in a state-of-the-art method has broad interdisciplinary interest. It is of particular relevance to ICLR that representations learned from unlabeled sequence data map directly to underlying physical structures.\", \"we_propose_the_following_plan_to_address_the_feedback_from_reviewers\": \"New related work section, with discussion of unsupervised and supervised contact prediction methods.\\nAdditional unsupervised contact prediction baselines (mean field DCA, and sparse inverse covariance).\\nResults on CASP13 to enable easier comparison with other methods.\\nMisc additional experiments detailed inline in response to reviewer comments.\\n\\nWe appreciate the thoughtfulness of the reviewers and believe that these changes will significantly improve the paper. We welcome any additional comments or suggestions from the reviewers or broader community.\"}",
"{\"title\": \"Interesting idea, but background and comparisons are lacking\", \"review\": \"In this manuscript, the authors present a method for predicting residue-residue contacts within protein structures using the attention layers learned by transformer language models. Using the largest transformer language models trained to data, the authors show good performance for contact prediction. The paper is clearly written and easy to follow.\\n\\nThe general concept of fine tuning protein language models for contact prediction has circulated for some time which lessens the core contribution, but the authors approach is surprisingly data efficient and accurate. Overall this is an interesting work, though there is quite a bit of background on contact prediction missing. This paper is also very application specific and may not present new machine learning methods of general interest to the ICLR community. The existence of previous language model-based contact prediction methods reduces the novelty of this work, especially given that the model used here is from Rives et al. 2019, who already look at contact prediction. Furthermore, no comparisons with state-of-the-art evolutionary coupling-based or language model-based contact prediction methods are performed. With this in mind, the manuscript may be better suited to submission at a biology specific venue.\\n\\nAdditional specific comments follow below.\", \"major_comments\": \"1.\\tMissing related work: there are a number of highly relevant prior works that are not mentioned/discussed. In particular, \\u201cDeep generative models of genetic variation capture the effects of mutations\\u201d \\u2013 Riesselman et al. 2018 was, as far as I know, the first paper to show that deep generative models capture structure information (see Figure 6). Following that, \\u201cLearning protein sequence embeddings using information from structure\\u201d \\u2013 Bepler & Berger 2019 was, to my knowledge, the first paper to propose deep language models (alignment free) for learning protein sequence representations and used those unsupervised representations for contact prediction. Furthermore, there has been extensive work in improving contact prediction using sequence + co-evolutionary features. See, for example, \\u201cEnhancing Evolutionary Couplings with Deep Convolutional Neural Networks\\u201d Liu et al. 2018 and \\u201cAccurate De Novo Prediction of Protein Contact Map by Ultra-Deep Learning Model\\u201d Wang et al. 2017. Other papers looking at protein structure prediction from sequence with deep learning, though they are less directly relevant, include \\u201cEnd-to-End Differentiable Learning of Protein Structure\\u201d AlQuraishi 2018 and \\u201cLearning Protein Structure with a Differentiable Simulator\\u201d Ingraham 2019.\\n2.\\tBefore this work, others have looked at fine tuning language models for contact prediction. How do those approaches compare with the approach presented here? Rives et al look at contact prediction in their manuscript describing the transformer model (which is the same model used here) on CASP 11-13 (see Table 5 in their manuscript). How does that approach compare with this one? Likewise for Bepler & Berger\\n3.\\tMany methods have surpassed GREMLIN for contact prediction using evolutionary couplings. How do those approaches compare with this one? It would be helpful to see how this approach compares with truly state-of-the-art contact prediction methods. 
Reporting results on the CASP data would help to make this comparison.\", \"minor_comments\": \"1.\tAlthough multiple sequence alignment methods have challenges, especially as related to evolutionary coupling prediction, these methods have been heavily optimized for decades. The authors should provide citations for claimed failings such as \\u201cfailure to find an optimal alignment\\u201d and \\u201csuboptimality of the substitution matrix and gap penalty.\\u201d Certainly, these may be sources of error in alignments, but I am not aware of any studies of the frequency or impacts of these errors on evolutionary coupling analysis. If these studies exist, I encourage the authors to cite them. If they do not exist, I suggest the authors focus on well-known sources of error here (namely, alignment depth) and provide references.\n2.\tThe authors use the language model without fine tuning, but the model could be fine tuned for each protein using its MSA. It\\u2019s great that contacts can be predicted without fine tuning, but it would be interesting to investigate whether additional gains can be made.\n3.\tEight iterations of jackhmmer is a lot. In my personal experience, jackhmmer often diverges at 3+ iterations. By this I mean that the set of sequences and HMM learned by jackhmmer drift far away from the original sequence/family. Did the authors perform any quality checks of these alignments to ensure jackhmmer did not diverge?\n4.\tHow are sequence depths in Figure 3 calculated? Is this the raw number of sequences in each MSA or is it after applying some sort of neighborhood weighting to calculate an effective number of sequences?\", \"things_that_would_improve_my_rating\": \"1.\tProvide a more comprehensive background review.\n2.\tCompare with state-of-the-art evolutionary coupling-based contact prediction methods.\n3.\tCompare with other language model-based contact prediction methods.\n4.\tWhat should interest the general machine learning community about this paper? What can we learn that might lead to better ML methods in the future? Convince me that this doesn\\u2019t belong in a bioinformatics venue!\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
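Reviewer 4's minor comment 4 distinguishes raw MSA depth from an effective number of sequences (Neff) computed with neighborhood weighting; the authors' response earlier in this thread says the raw number of sequences was used. For reference, the usual reweighted Neff looks like the sketch below, where each sequence is down-weighted by its number of neighbors above an identity cutoff. The 0.8 cutoff and all names are common-convention assumptions, not details from the paper.

```python
import numpy as np

def effective_sequences(msa, identity_cutoff=0.8):
    """Effective number of sequences (Neff) under neighborhood weighting.

    msa is an (N x L) integer array of aligned sequences; each sequence is
    down-weighted by how many sequences (itself included) share at least
    identity_cutoff of their positions with it."""
    identity = (msa[:, None, :] == msa[None, :, :]).mean(axis=2)  # (N, N) pairwise identity
    neighbors = (identity >= identity_cutoff).sum(axis=1)         # neighborhood sizes
    return float((1.0 / neighbors).sum())

# Toy alignment of 100 random sequences over a 21-letter alphabet.
msa = np.random.default_rng(3).integers(0, 21, size=(100, 60))
print(effective_sequences(msa))
```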
"{\"title\": \"Interesting analyses, but has overall limited novelty\", \"review\": [\"**Summary**\", \"The paper performs a number of analyses centered around the ability of transformer-based language models trained on protein sequence data to learn representations useful for predicting protein secondary and tertiary structure (the latter as contact maps). Specifically, the paper studies several pre-trained transformer models by fitting an L1-penalized logistic regression to amino acid pair contacts. Several experiments are performed to showcase that (i) transformer-based representations can outperform state-of-the art methods based on MSA in terms of contact prediction precision; (ii) that the necessary information for contact predictions in these representations is learned in an unsupervised manner (and not by the logistic regression put on top of these representations); and (iii) that the contact prediction probabilities are reasonably well calibrated.\", \"**Score justification**\", \"In its current form the paper presents interesting analyses, but has overall limited novelty. The ability of transformer models to learn representations predictive of secondary and tertiary structure has been demonstrated before (including in the papers proposing the models used by the authors). Furthermore, I have some questions regarding the methodology employed by the authors.\", \"**Major comments**\", \"The main metric employed by the authors is the precision of the top L (protein length) contact prediction for a given range (P@L). I wonder why the authors do not also consider recall at L as an accompanying metric for reporting the results.\", \"When comparing ESM to the baseline Gremlin method, the authors consider two scenarios: (i) Gremlin trained on the trRosetta data; and (ii) Gremlin trained on the same data as the ESM transformer model. Overall, Gremlin trained on the ESM data - which is arguably the correct baseline for the ESM model - performs worse than Gremlin trained on the trRosetta data. Why is that the case? How does the procedure for preparing MSA for the ESM data compare to that of the trRosetta data? Can it be tuned to improve Gremlin's performance?\", \"The paper compares several transformer models that differ primarily in the model size, dataset size and hyper-parameters. As can be seen from Table 1 of the manuscript, these differences are clearly important for the contact prediction task and thus should be summarized and discussed in more detail.\", \"From what I understand the sequences from the testing set of the contact prediction problem (or sequences highly similar to them) could appear in the training sets of the considered transformer models. This creates some information leakage. It's unclear from the results presented in the paper whether it is an issue or not - how does contact prediction precision / recall change as sequence similarity to the ESM training set drops?\", \"The authors present analysis on the usefulness of the representations learned by various attention heads for contact prediction; and on robustness of such predictions. I wonder how robust the results of these analyses are - they appear to have been performed using a single checkpoint of the ESM model, which is a result of stochastic training from random initialization.\", \"In the Appendix the authors talk about the benefit of using predicted contact maps for inferring the all-atom protein 3D structure. However no results on this are presented. 
I would be very eager to see the comparison of 3D structure accuracy inferred with ESM-predicted and Gremlin-predicted contacts.\", \"**Minor comments**\", \"Introduction talks about the ESM-1b model but (as far as I can tell) a reference isn't provided until a later section.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Review for TRANSFORMER PROTEIN LANGUAGE MODELS ARE UNSUPERVISED STRUCTURE LEARNERS\", \"review\": \"In this paper, the authors show that transformer protein language models can learn protein contacts from the unsupervised language modelling objectives. They also show that the residue-residue contacts can be extracted by sparse logistic regression to learn coefficients on the attention heads. One of the advantages of using transformers models is that they do not require an alignment step nor the use of specialized bioinformatics tools (which are computationally expensive). When compared to a method based on multiple sequence alignment, the transformers models can obtain a similar or higher precision.\", \"contributions_of_this_paper_are\": \"- showing that the attention maps built in Transformer-based protein languages learn protein contacts, and when extracted, they perform competitively for protein contact prediction;\\n- a method for extracting attention maps from Transformer models;\\n- a comparison between a recent protein transformer protein language model (which does dot require sequence alignment), and a pseudo-likelihood-based optimization method that uses multiple sequence alignment;\\n- an analysis of how much the supervised learning (logistic regression) contributes to the results.\\n\\nThe paper covers a relevant topic and it is easy to read. \\n\\nHowever, I have a number of concerns. The main contribution of the paper is that attention maps built in Transformer-based protein languages learn protein contacts and can be used for protein contact prediction. However, this was reported before in Rives et al.(2019) (doi: 10.1101/622803). Also, several methods have been developed for this problem, but are not included in the comparisons. Finally, the provided implementation details are not sufficient to reproduce the results of the paper. \\nI detail some of these concerns below, together with questions/suggestions for improvements:\\n\\n1) I would recommend comparing transformers to other methods besides Gremlin, or justify why other methods were not included. This review can be helpful:\\n\\n(Adhikari B, Cheng J., 2016.. doi: 10.1007/978-1-4939-3572-7_24)\\n\\nAlso, more recent methods that were published after the review are:\\n\\n(Badri Adhikari, 2020. https://doi.org/10.1093/bioinformatics/btz593)\\n\\n(Luttrell et al., 2019. https://doi.org/10.1186/s12859-019-2627-6)\\n\\n(Gao et al.,2019. https://doi.org/10.1038/s41598-019-40314-1)\\n\\n(Ji S et al., 2019. https://doi.org/10.1371/journal.pone.0205214)\\n\\n2) On page 7, the authors state that \\\"We find that the logistic regression probabilities are reasonably well calibrated estimators of true contact probability and can be used directly as a measure of the model's confidence (Figure 10a)\\\". However, from the plot in Figure 10a, it is not totally clear that the probabilities are well calibrated. Could the authors add more justifications of why they consider it well calibrated? Could they also show a comparison of the calibration of the other transformer models, perhaps using MSE as a calibration metric?\\n\\n3) To understand the occurence of false positives, the authors analyze the Manhattan distance between the predicted contact and the true contact, which is between 1 and 4 for most false positives. They also show an example of a homodimer, for which predictions were far from the true contacts, and explain that the model is picking up inter-chain interactions. 
Could the authors report how many predictions have a Manhattan distance larger than 4? Is this one example representative of the group of false positives far from the true contact? Maybe the authors could analyse whether this happens in most of the cases.\\n\\n4) While ESM-1 is open-source and publicly available, this is not the case for ESM-1b. In section A.5, the authors provide implementation details as differences between ESM-1 and ESM-1b, stating \\u201cCompared to ESM-1, the main changes in ESM-1b are: higher learning rate; dropout after word embedding; learned positional embeddings; final layer norm before the output; and tied input/output word embeddings. The weights of all ESM models throughout the training process were provided to us by the authors.\\u201d. In my opinion, this is not enough to reproduce the results in this paper. To make it reproducible, the authors need to provide a detailed enough description of the differences to make the reader able to implement ESM-1b, or provide the weights and hyperparameters required to reproduce their results.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
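Point 3 of this review asks how many predicted contacts lie more than a Manhattan distance of 4 from any true contact. A hypothetical helper for that tabulation could look like the following; the pair lists and names are purely illustrative.

```python
import numpy as np

def min_manhattan_distances(pred_pairs, true_pairs):
    """For each predicted contact (i, j), the Manhattan distance to the
    closest true contact."""
    pred = np.asarray(pred_pairs)[:, None, :]   # (P, 1, 2)
    true = np.asarray(true_pairs)[None, :, :]   # (1, T, 2)
    return np.abs(pred - true).sum(axis=2).min(axis=1)

# Toy tabulation of how many predictions fall farther than 4 from any true contact.
pred = [(3, 10), (20, 40)]
true = [(4, 11), (25, 40)]
dists = min_manhattan_distances(pred, true)
print(dists, int((dists > 4).sum()), "prediction(s) farther than 4")
```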
"{\"title\": \"Using Transformers for protein contact prediction is not new\", \"review\": \"## Summary\\nThe paper shows that Transformers trained unsupervised on millions of protein sequences learn information about protein contacts by using attention maps for contact prediction. The paper is mostly clearly written and discusses server interesting ablation experiments. However, two recent papers that appeared on arXiv before the ICLR submission deadlines also use Transformers for protein contact prediction. These papers and other methods for contact prediction beyond Gremlin are not described. I therefore consider the contributions as insufficient for an ICLR submission.\\n\\n## Major comments\\n\\n1. Using Transformer attention maps for protein contact prediction is not new. See Rives et al, 2020, \\u2018Biological structures and functions emerge\\u2026\\u2019, section 5.2, and Vig et al, 2020, \\u2018Bertology\\u2019 section 4.2. Both publications appeared on arXiv at least one month before the ICLR submission deadline and are not clearly discussed in the paper.\\n\\n2. The introductions discusses existing work on Transformers for protein languages models. Existing methods for contact prediction (beyond Gremlin), however, are not described sufficiently.\\n\\n3. It is unclear which sequences were used for training the Transformer models and how similar they are to test sequences.\\n\\n4. The paper compares Transformers to Gremlin. However, it is unclear how well they perform to the CASP state-of-the art (see also Rives et al, 2020).\\n\\n5. Section 3.4 does not describe clearly enough how attention maps were used for predicting contact maps. How were attention maps symmetrized? Which layers and heads were used and how were they aggregated? What is the number of resulting features that were used to train the logistic regression model? APC is not described or cited.\\n\\n6. Section 4.5 discusses that Transformers can be also used for secondary structure prediction. This is not new (see Rives 2020 and Vig 2020) and does not fit well to the rest of the paper, which is about contact prediction. \\n\\n6. Section 4.8: Using transformers for generating proteins with natural properties is not new (see Madani et al, 2020, \\u2018ProGen\\u2019 or Rives et al, 2020). \\u2018Wang & Cho\\u2019 were not the first who used Transformers generativity (see Vaswani, 2017).\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
27acGyyI1BY | Neural ODE Processes | [
"Alexander Norcliffe",
"Cristian Bodnar",
"Ben Day",
"Jacob Moss",
"Pietro Liò"
] | Neural Ordinary Differential Equations (NODEs) use a neural network to model the instantaneous rate of change in the state of a system. However, despite their apparent suitability for dynamics-governed time-series, NODEs present a few disadvantages. First, they are unable to adapt to incoming data-points, a fundamental requirement for real-time applications imposed by the natural direction of time. Second, time-series are often composed of a sparse set of measurements that could be explained by many possible underlying dynamics. NODEs do not capture this uncertainty. In contrast, Neural Processes (NPs) are a new class of stochastic processes providing uncertainty estimation and fast data-adaptation, but lack an explicit treatment of the flow of time. To address these problems, we introduce Neural ODE Processes (NDPs), a new class of stochastic processes determined by a distribution over Neural ODEs. By maintaining an adaptive data-dependent distribution over the underlying ODE, we show that our model can successfully capture the dynamics of low-dimensional systems from just a few data-points. At the same time, we demonstrate that NDPs scale up to challenging high-dimensional time-series with unknown latent dynamics such as rotating MNIST digits. | [
"differential equations",
"neural processes",
"dynamics",
"deep learning",
"neural ode"
] | Accept (Poster) | https://openreview.net/pdf?id=27acGyyI1BY | https://openreview.net/forum?id=27acGyyI1BY | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"gGEtOvN6-j",
"LXjRAcQRz4m",
"kpl6WQ64Mcz",
"2Yfdzs1p1X-",
"2XJPvFix9_u",
"jxovmokIKek",
"8-V30vOXssJ",
"64fPeDyhr86",
"USP0L4qpds9",
"gK4uQKz0zjA",
"8TRH1_BOYDy",
"vQpRWtnGorZ",
"XiJTk3lKuXS",
"tCRQ72qqu_C"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040449883,
1606246760486,
1606214350595,
1606214297212,
1605962682683,
1605210511338,
1605208591434,
1605206002445,
1605205535115,
1605205468388,
1603935849822,
1603855185291,
1603811463129,
1603742492873
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3579/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3579/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3579/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3579/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This work proposes a stochastic process variant that extends existing work on neural ODEs. The resulting method allows for a fast data-adaptive method that can work well fit to sparser time series settings, without retraining. The methodology is backed up empirically, and after the response period, the reviewers' concerns are sufficiently addressed and reviewers are in agreement that the contributions are clear and correct.\"}",
"{\"title\": \"Comment\", \"comment\": \"We are pleased that we have been able to provide satisfactory answers to your questions and would like to thank you again for the helpful feedback!\"}",
"{\"title\": \"Comment\", \"comment\": \"We are glad our reply has addressed your concerns. Thank you for updating your score!\"}",
"{\"title\": \"Comment\", \"comment\": \"We are happy you have found our reply satisfactory. Thank you for updating your score!\"}",
"{\"title\": \"Updated manuscript\", \"comment\": \"Dear Reviewers,\\n\\nWe have uploaded a new version of our manuscript and made our code available with the supplementary. We have incorporated into this new version all the feedback that we have received. We believe that your comments have visibly improved the quality of the manuscript. A complete changelog can be found below.\", \"changes_to_the_manuscript\": [\"Added a new subsection called *Learning and Inference*, in which we include additional details about the training procedure and give our ELBO loss (R1, R2).\", \"Added a full derivation of the ELBO in Appendix A (R1, R2).\", \"Included pseudocode for the training procedure in Appendix B (R1, R2).\", \"We have included a graphical model description of NDPs in Section 3.1, which we hope will further clarify the generation and inference procedures of the model (R1, R2).\", \"Added further details about the tasks and the architectures that were used in the experimental section (R3).\", \"Added full task details for the 1D regression tasks in Appendix G.1 (R3).\", \"Added full task details for the Lotka-Volterra task in Appendix G.2 (R3).\", \"Added a detailed description of the architecture used in the low and high-dimensional experiments in Appendix F (R2, R3).\", \"Reformatted Table 1 to make the data easier to read.\", \"Added the motivation behind the split of $z = [l, z\\u2019]$ in Section 3.1 (R1).\", \"Added an ablation study for the size of the latent ODE state in Appendix E (R1).\", \"Added final MSE on the LV task for NPs and NDPs in the main text.\", \"Moved the stochastic process proofs to the supplementary (R2).\", \"Made the discussion section slightly more concise (R2).\", \"Added a time unit to the training time comparisons (Table 3) and a description of the system used for our experiments in Appendix D (R2).\", \"Many small fixes, including an overall \\u2018tightening up\\u2019 of our notation and language.\"]}",
"{\"title\": \"Initial Comment\", \"comment\": \"We would like to thank the reviewers for their thorough reviews, helpful comments, and actionable suggestions for improving our manuscript. We are of course delighted that the reviewers view our work favourably, and that they see our core innovation as interesting, important, applicable and worthy of sharing with the community. We are grateful that the reviewers have focused on concrete suggestions for improving the manuscript, particularly in the presentation of technical details, which we will be able to act upon within the discussion period and will surely improve the quality of the work. Furthermore, we appreciate the insightful questions that will help improve the clarity of our discussion.\\n\\nWe have responded to each review directly to address the specific comments they make. We will share an updated version of the manuscript, additional supplementary materials, and our codebase within the discussion period.\\n\\nAgain, we would like to express our gratitude for these constructive reviews and welcome further comments or requests.\"}",
"{\"title\": \"Response for AnonReviewer1\", \"comment\": \"We are grateful for the effort the reviewer has put into their feedback and we are convinced it will significantly improve the quality of our manuscript. We are pleased that the reviewer remarked on the impact our method can have in real-world applications, though we agree that the points raised by the reviewer deserve further clarification. We will release a new version of the manuscript during the discussion period to integrate this feedback as well as that of the other reviewers. We provide below a detailed response for each of the points that were raised.\\n\\n**Clarification about what\\u2019s going on with L and z\\u2019**\\n\\nAs in NPs, $z$ captures the global uncertainty, i.e. uncertainty over functions. The purpose of the split is to factorize the global uncertainty in the dynamics into an uncertainty in the initial position (\\u2018how things start\\u2019, given by $L(0)$) and an uncertainty in the ODE derivative function (\\u2018how things change\\u2019, conditioned by $z\\u2019$). This inductive bias is intended to help the model adapt well to tasks where either the initial conditions or the way the system evolves is fixed. As a concrete example, consider the motion of pendulums: for a single pendulum of fixed length, the variation in trajectories is confined to the initial conditions ($L(0)$). If instead we aim to model the motion of pendulums of different length, then there is variation both in the initial conditions ($L(0)$) and the way the system evolves (determined by $z\\u2019$).\\n\\n**How many dimensions should $l$ have? How does the dimension of $l$ affect the results?**\\n\\nIn general, the greater the dimensionality of $l$, the greater the range of dynamics that can be learned. This is the motivation behind Augmented Neural ODEs (Dupont et al, 2019), which append extra dimensions to the ODE. Extra dimensions were also shown to allow a Neural ODE to learn higher-order behaviour (Norcliffe et al, 2020). On the other hand, increasing the dimension permits overfitting. For the MNIST experiments, we found l = 40 to perform best in a (limited) hyperparameter search.\\n\\nWe will include an ablation study to show the effects of $l$ on performance for the Sine task.\\n\\n**Clarification of the learning procedure. How should $z\\u2019$ be learned?**\\n\\nWe train end-to-end using an ELBO loss, similar to NPs. Specifically, we use the following lower bound for the log-likelihood of the target set $t_{m+1:n}$, $y_{m+1:n}$ given a context set $t_{1:m}$, $y_{1:m}$:\\n\\n$\\\\log p(y_{m+1:n} | t_{1:n}, y_{1:m}) \\\\geq E_{q(z|t_{1:n}, y_{1:n})}\\\\Bigg[\\\\sum_{i=m+1}^{n} \\\\log p(y_i | z, t_i) + \\\\log \\\\frac{q(z|t_{1:m}, y_{1:m})}{q(z|t_{1:n}, y_{1:n})} \\\\Bigg] $\\n\\nAs usual, $Z$ is the latent variable from the amortised variational inference procedure, and the approximate posterior over $Z$ is given by the encoder. As in NPs, during training, we sample context and target sets of different sizes such that the model can become sensitive to the size of the context. The size of these sets is drawn from a uniform distribution over $\\\\{1, \\u2026, N\\\\}$, where $N$ is the maximum context size.\\n\\nWe will add these details to the main text as well as pseudocode and a full ELBO derivation in the supplementary, and make the code available to reviewers together with the new manuscript.\\n\\n**Why NDP is able to extrapolate well when there is variable angular velocity and angular shift (Fig. 
5) and fails to extrapolate when there is constant angular velocity (Fig. 4)**\\n\\nThis is an interesting question. First, we note that NPs are not able to learn periodic functions that extrapolate indefinitely (a widely known result that has been formalised and explored recently by Ziyin et al. in Neural Networks Fail to Learn Periodic Functions and How to Fix It, NeurIPS 2020). Neural ODEs, however, are not constrained by this finding and are able to learn periodic functions (e.g. $(\\dot{x},\\dot{v}) = (v,-x)$ produces periodic motion in $x$).\\n\\nFor the original task (Rotating MNIST, Fig. 4), the variation is only typographic, as the angular shift and velocity are fixed, so the dynamics are fixed. The NP model produces reasonable outputs over the first 16 frames because it has learned a good approximation of sine over the interval $(0,2\\pi)$, but it does not extrapolate. As there is no variation in the dynamics, the NDP is not learning to model the function as being periodic, and its extrapolation closely matches that of the NP, i.e. it stops rotating and becomes noisy after 16 frames. So we conclude that the NDP model has collapsed to something like the NP as a result of the task not having any variation in the dynamics.\\n\\nIn the case of the task we introduce (Fig. 5), there is also variation over the dynamics. In this case, the NP performs much worse, with the outputs being indistinct blurs. However, as the NDP is able to learn periodic functions, it is able to produce legible reconstructions, and, as the model is now being tasked with actually learning a distribution over periodic dynamics, the periodic extrapolation quality improves.\"}",
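The claim above that Neural ODEs, unlike NPs, can represent periodic functions is easy to check on the linear system the authors cite. The following minimal sketch (an illustration, not the paper's model) integrates $(\dot{x},\dot{v}) = (v,-x)$ and confirms that $x(t)$ tracks $\sin(t)$ far beyond any training range:

```python
# Minimal check that the linear ODE (x_dot, v_dot) = (v, -x) -- a vector field a
# Neural ODE can represent -- produces periodic motion x(t) = sin(t).
import numpy as np
from scipy.integrate import solve_ivp

t_eval = np.linspace(0, 8 * np.pi, 200)  # well beyond one period
sol = solve_ivp(lambda t, s: [s[1], -s[0]], (0, 8 * np.pi), [0.0, 1.0], t_eval=t_eval)
print(np.max(np.abs(sol.y[0] - np.sin(t_eval))))  # small integration error
```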
"{\"title\": \"Response for AnonReviewer3\", \"comment\": \"We would first like to thank the reviewer for the detailed and useful feedback they have provided. We were glad to see that the reviewer has appreciated the usefulness of our method as well as the experimental validation of its advantages. We completely agree that more information should be provided about the tasks. However, we respectfully disagree regarding the missing comparisons and our reasoning is provided below. We will integrate these changes in a revised version of the manuscript, which we will release within the discussion period. We provide below a detailed response for the individual points that were raised\\n\\n**More task details needed**\\n\\nWe agree that further task and dataset details should be included. We will include both the used architectures and the dataset details. Additionally, our code will be included in our supplementary material. More details can be found below: \\n\\n*One-dimensional regression experiments*\\n\\nEach task is based on a random function described by a set of parameters that are uniformly sampled. A trajectory example is formed by sampling from the parameter distributions and then sampling from that function at evenly spaced timestamps, $t$, over a fixed range to produce 100 data points $(t,y)$ per function. \\n\\n- Sines - $y=a\\\\sin(t-b)$: $a$ range = (-1.0, 1.0), $b$ range = (-0.5, 0.5), $t$ range = (-$\\\\pi$, $\\\\pi$).\\n- Exponentials - $y = \\\\frac{a}{60}\\\\exp(t-b)$: $a$ range = (-1.0, 1.0), $b$ range = (-0.5, 0.5), $t$ range = (-1, 4)\\n- Straight Lines - $y=at+b$: $a$ range = (-1.0, 1.0), $b$ range = (-0.5, 0.5), $t$ range = (0, 5).\\n- Harmonic Oscillators - $y = a\\\\sin(t-b)\\\\exp(-0.5t)$: $a$ range = (-1.0, 1.0), $b$ range = (-0.5, 0.5), $t$ range = (0, 5)\\n\\nFor each task, we generate 490 such training sequences and use 10 for testing. To compute the standard error, each model was trained 5 times on each dataset, with a batch size of 5. \\n\\n*Lotka-Volterra*\\n\\nTo generate samples from the Lotka-Volterra system, we sample different starting configurations, $(u_{0}, v_{0}) = (2E, E)$, where $E$ is sampled from a uniform distribution in the range (0.25, 1.0). We then evolve the Lotka Volterra system parametrised by $(\\\\alpha,\\\\beta,\\\\gamma,\\\\delta)=(\\\\frac{2}{3},\\\\frac{4}{3},1,1)$. This is evolved from $t=0$ to $t=15$ and then the times are rescaled by dividing by 10. \\n\\nWe train on a set of 40 trajectories using a batch size of 5. We test on 10 separate trajectories. To compute the standard error, we train the models on 5 different seeds. \\n\\n*Rotating MNIST*\\n\\nFor the rotating MNIST experiment, we follow the approach from ODE2VAE (Yildiz et al, 2019). We remove the digit corresponding to the 4th rotation from the dataset and use it as a test frame. Additionally, we remove four random rotation angles from each sequence in the dataset to simulate irregularly sampled data. At testing time, when we use a context set of size one (first part of Table 2), we supply only the first frame in the sequence (same as ODE2VAE). When we use a larger context set (the second part of Table 2), we supply seven randomly sampled frames. In both cases, we report the MSE for the predicted test frame over all the sequences. 
\\n\\n\\n**Missing comparisons with ConvCNPs and SNPs**\\n\\nWe respectfully disagree that the work lacks comparisons with these models, primarily as the approaches are orthogonal to our own.\\n\\nConvCNPs aim to improve the way context sets are represented and encoded in the related family of Conditional NP models. Our method does not rely on a particular choice of encoder or representation. (In fact, we use a regular NP encoder in all of our experiments.) The methods (ConvCNP and NDP) are orthogonal, and the contributions from the ConvCNP paper could be employed in our model by simply replacing the encoder. In the interests of providing a clear exposition of the benefits of our approach, we opted to focus on improving directly upon the vanilla NP model, rather than including additional comparisons between baseline ConvNPs and ConvNDPs. However, we agree that it would be interesting to measure the potential synergistic effects of the two methods. NOTE: to clarify, the convolutional models we use for the image experiments are using regular convolutional encoders (i.e. a CNN instead of an MLP) and do NOT use a ConvCNP-like encoder. This is just an unfortunate choice of nomenclature. \\n\\nSequential NPs (SNPs) are a different kind of model again, as, unlike our method, which models a single stochastic process (as theoretically proven in the paper), SNPs model a dynamically changing sequence of stochastic processes, each of which is not necessarily defined over time. Therefore, the SNP method naturally targets a different set of applications from ours. Still, SNPs are also orthogonal to our approach. For instance, one could consider a hybridization that captures a dynamically evolving sequence of NDPs.\\n\\nWe have included a discussion of these orthogonal directions in our related work section.\"}",
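For concreteness, the Lotka-Volterra data generation described in the response above can be reproduced with a short script; the sketch below uses the stated parameters, but the exact code is an assumption rather than the authors' released script:

```python
# Sketch of the Lotka-Volterra generation described above: sample the starting
# configuration (u0, v0) = (2E, E) with E ~ U(0.25, 1.0), evolve the system
# with (alpha, beta, gamma, delta) = (2/3, 4/3, 1, 1) on [0, 15], rescale times.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 2 / 3, 4 / 3, 1.0, 1.0

def lotka_volterra(t, s):
    u, v = s
    return [alpha * u - beta * u * v, delta * u * v - gamma * v]

rng = np.random.default_rng(0)
E = rng.uniform(0.25, 1.0)
sol = solve_ivp(lotka_volterra, (0, 15), [2 * E, E], t_eval=np.linspace(0, 15, 100))
t, y = sol.t / 10.0, sol.y.T  # times rescaled by dividing by 10
```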
"{\"title\": \"Response for AnonReviewer4\", \"comment\": \"We appreciate the positive feedback and we are happy the reviewer has highlighted the clarity of the structure, the novelty of our method as well as its applicability to a broad range of real-world datasets. We will also make available during the discussion period an improved version of the manuscript, which will incorporate all the feedback we have received.\"}",
"{\"title\": \"Response for AnonReviewer2\", \"comment\": \"We would like to thank the reviewer for the time they\\u2019ve put into this thorough and actionable review, which we believe will significantly improve our paper. We\\u2019re pleased that the reviewer appreciates the value in combining the NODE and NP frameworks, and we appreciate and agree with the various suggestions for improving the clarity of the manuscript. We will make a new version available within the discussion period that will address these comments as well as the suggestions from the other reviewers. We provide a detailed answer below for each of the points that were raised.\\n\\n**Being more thorough about the inference procedure, training procedure, and encoder architecture**\\n\\nWe will significantly expand on these aspects in the main text and will include pseudo-code in the supplementary material for the inference and training procedures. We are also working on making our code available to the reviewers in the next few days.\\n\\n*Encoder architecture*: the encoder is left unchanged in our method with respect to a normal NP. For the low-dimensional experiments, the encoder is an MLP that processes each $(t_i, y_i)$ pair. When $y_i$ is an image, the encoder is a convolutional network and we append $t_i$ (the time) to the convolutional features of the later layers. The embeddings $r_i$ for all $(t_i, y_i)$ pairs in the context are aggregated in a single embedding $r = mean(\\\\{r_i\\\\})$, which takes the mean of the individual embeddings. This $r$ is used to parametrise the normal distribution from which $z$ is sampled: $\\\\mathcal{N}(Linear(r), Linear(r))$. \\n\\n*Training & inference procedure*: indeed, we train end-to-end using an ELBO loss similar to the ELBO loss from NPs. Specifically, we use the following lower bound for the log-likelihood of the target set $t_{m+1:n}$, $y_{m+1:n}$ given a context set $t_{1:m}$, $y_{1:m}$:\\n\\n$\\\\log p(y_{m+1:n} | t_{1:n}, y_{1:m}) \\\\geq E_{q(z|t_{1:n}, y_{1:n})}\\\\Bigg[\\\\sum_{i=m+1}^{n} \\\\log p(y_i | z, t_i) + \\\\log \\\\frac{q(z|t_{1:m}, y_{1:m})}{q(z|t_{1:n}, y_{1:n})} \\\\Bigg] $\\n\\nWe will add a full derivation of this ELBO loss in the supplementary and hope this will make our training procedure precise. As in NPs, during training, we sample context and target sets of different sizes such that the model can become sensitive to the size of the context. The size of these sets is drawn from a uniform distribution over $\\\\{1, \\u2026, N\\\\}$, where $N$ is the maximum context size. For optimization, we use RMSProp with the default PyTorch parameters. \\n\\n**More information about the computational cost**\\n\\nAlong with the ratios of times NDP/NP, we will also include the training times for NPs over 30 epochs for the 1D datasets in the supplementary material. These were between 20 seconds and 100 seconds. For this, we used a machine equipped with an Nvidia Titan XP GPU.\\n\\n**Are different contexts trained in parallel?**\\n\\nYes. They are trained in parallel in the sense that we train over batches of sampled contexts from multiple time-series. In other words, our $t$ (time) batch has shape (batch_size, context_size, 1) and our corresponding $y$ batch has shape (batch_size, context_size, dim($y$)) . \\n\\nBecause Neural ODEs do not support batching in the usual sense, we use the following computational trick. 
We consider a bigger ODE with a state of dimension [latent_size * batch_size], which concatenates the independent states of the ODEs corresponding to each element in the batch. That allows us to evolve all the independent ODE states in the batch concurrently. We integrate this extended ODE state over the union of all the time steps in the batch. This \\u201cbatching\\u201d approach is common for Neural ODEs applied to (irregularly sampled) time-series (Rubanova et al., 2019).\\n\\n**Proofs could be moved to an appendix. A lot of space is devoted to the discussion and the conclusion.**\\n\\nWe will move the proofs to the appendix as the reviewer suggested. We also agree that the discussion and conclusion can be more concise. We will use the space gained from these changes to clarify the aspects above.\"}",
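The batching trick described in the reply above can be made concrete in a few lines. The sketch below assumes the torchdiffeq library and is illustrative only; the model, sizes and time grid are placeholders, not the authors' implementation:

```python
# Illustrative sketch of the batching trick: concatenate the per-sequence latent
# states into one big ODE state and integrate over the union of all time stamps.
import torch
from torchdiffeq import odeint  # assumed dependency

batch_size, latent_size = 5, 10
f = torch.nn.Sequential(torch.nn.Linear(latent_size, 64), torch.nn.Tanh(),
                        torch.nn.Linear(64, latent_size))

def big_ode(t, state):
    # state: (batch_size * latent_size,) -> evolve each latent state independently
    l = state.view(batch_size, latent_size)
    return f(l).view(-1)

l0 = torch.randn(batch_size * latent_size)       # concatenated initial states
t_union = torch.sort(torch.rand(20)).values      # union of batch time stamps
states = odeint(big_ode, l0, t_union)            # (20, batch_size * latent_size)
```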
"{\"title\": \"Review of Neural ODE Processes\", \"review\": \"This work presents a new method that combines neural ODEs, which uses neural networks to flexibly describe a non-linear dynamical system, and neural processes, which uses neural networks to parameterize a class of functions. By combining these two approaches, neural ODE processes can now adapt to incoming data-points as the latent representation parameterizes a distribution over ODEs due to the NP component of the model. The authors use different variants of the model (first order ODE, second order ODE, linear latent readout) to infer dynamics from data in the experiments section. I find this work to be an interesting and important extension to the existing literature on neural processes.\\n\\nMy primary qualms with the manuscript is that I found it difficult to glean some of the details about the model(s) and, in particular, the details of the inference procedure. I assume many of the same details in the original NP paper apply here, but it is not clear to what extent, and exactly how. Many of these important inference and model details seem to be missing from both the main text and the supplemental material. \\n\\nIn particular, when you first discuss the aggregator some key details are missing. You mention that the mapping from r to z could be a NN but you are not clear on when/if the encoder is a neural network in the actual experiments. Also, is it the case that data from multiple contexts are trained in parallel? It is important to specify all of the details for the encoder for each of the experimental sections. The decoder and the other pieces of the model are clear.\\n\\nMoreover, how exactly is this trained? SGD? Adam? Is it end to end? I assume you are optimizing an ELBO and the inference methods are akin to a VAE (or the original NP paper), but it is not explicitly said anywhere. Stepping through or at least explaining the primary details of training the model and the training objective will be useful. \\n\\nFinally, it is unclear how long this inference takes or what kind of computing resources are needed. Though there are some comparisons of training different versions of the model in the appendix, there is no sense of how long an 'epoch' is. Because there was no code that I could see with the submission, this is doubly difficult to glean. \\n\\nI think the proofs of secion 3.2 could be moved to an appendix. Additionally, a lot of space is devoted to the discussion and the conclusion; I would rather see more clarity provided to the implementation of NDPs and their differences at every stage of the model across the experiments.\\n\\nI am excited about the work and it does seem to be a useful extension of existing methods, and I think there are details that need to be clarified in order for this to be publishable.\", \"minor_details\": \"Bottom of page three \\\"the the\\\"\\n\\n\\n\\n%%%%% EDIT %%%%%%\\n\\nI am satisfied with the author's response and given the proposed changes will raise my score to a 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Review for NEURAL ODE PROCESSES\", \"review\": \"This paper proposes a new class of stochastic processes determined by a distribution over Neural ODEs. The overall structure of the paper is clear. I find the newly defined process interesting and applicable to many real data sets.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"Official Blind Review by reviewer3\", \"review\": \"This paper proposes a new algorithm that can adapt incoming data-points by applying Neural Ordinary Differential Equations (NODEs) to Neural Processes (NPs). It combines two algorithms properly and showed better performance than NPs through ODEs in the encoding, even with a smaller number of parameters.\", \"strengths\": \"1) They properly combined NODEs and NPs to fast-adapt few data points from underlying ODE over ODE distributions.\\n\\n2) They showed their algorithm outperforms NPs through ODE encoding with fewer parameters.\\n\\n3) They analyzed several variations like Second-Order Neural ODE Process or Latent-only version.\", \"weaknesses\": \"1) Task details are not clearly described. I checked the appendix also, but they just mentioned: \\\"with varying gradients and amplitudes and shifts...\\\". \\n\\n2) Lack of comparison with previous works: For instance, one of the advantages of this work is good interpolation and extrapolation. Convolutional Conditional NP (Conv-CNP, Jonathan Gordon et al., 2019) also outperformed other NPs methods for extrapolation, but they didn't compare Conv-CNP as one of the baselines. For the rotated MNIST experiment, Sequential Neural Processes (SNPs, Singh et al., 2018) isn't compared.\", \"the_correctness_of_their_claim_and_clarity\": \"This paper is well written and almost correct, but the details about the experimental setting look missed.\", \"additional_feedback\": \"Thank you for submitting it. I enjoyed reading it. I think that it is a well-written paper and deserved sharing in our community. However, detailed information (e.g., task details) is not clearly described, and some comparison results are missed. By updating those things, it will be more concrete. For the rotated MNIST experiment, evaluating the version applying NODEs to SNPs could be interesting also.\\n\\nMinor things are\\n\\nOn page 5, \\n\\\"....Additional details for every task considered can be found in C.\\\" -> \\\"Additional details for every task considered can be found in Appendix C.\\\"\\nSecondly, as is seen in A, NDPs train faster in a faster wall clock time than other variants. -> Secondly, as is seen in A, NDPs train faster in a wall clock time than other variants.\\n\\nOn page 7,\\nwe show the mean squared errors (MSE) for the 4th rotated MNIST digit in Table 2. -> what is the meaning of the 4th rotated MNIST?\\n\\n#####EDIT#####\\nI agree that the author's disagreement with my second comment and thank you for the update. I change my rate to 7.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"This paper introduces a new class of stochastic processes, called Neural ODE Processes (NDPs), by integration of Neural ODE (NODE) and Neural Processes (NP).\", \"review\": \"The proposed NDP has two main advantages: 1- it has the capability to adapt the incoming data points in time-series (unlike NODE) without retraining, 2- it can provide a measure of uncertainty for the underlying dynamics of the time-series. NDP partitions the global latent context $z$ to a latent position $l$ and sub-context $z^\\\\prime$. Then it lets $l$ follow an ODE, called latent ODE. This part is actually the innovation of the paper where by defining a latent ODE, the authors take advantages of ODEs to find the underlying hidden dynamics of the time-series. This assumption helps find better dynamics when the generating processes of time-series meet some ODEs. Then the authors define a stochastic process very like the idea from Neural Processes (NP) paper, that is, by defining a latent context $z$ (which here is a concatenation of $l$ and sub-context $z^\\\\prime$) with a prior p(z) and integrating a Gaussian distribution of a function of $z$ (decoder $g(l,t,z^\\\\prime)$ which is a neural network) over $z$.\\n\\nOverall, I liked the idea of the paper and how the authors integrate two important concepts, i.e. NODE and NP, into a single framework, which could be useful in many real-world time-series with complex underlying dynamics. However, I have some questions regarding some points in the paper:\\n\\n1- The paper says that $z$ is split into two parts: $l$ and $z^\\\\prime$, where $z^\\\\prime$ is kept unchanged over time and only $l$ follows an ODE. I wonder why is this the case? How many dimensions should $l$ have? How does the dimension of $l$ affect the results? Why not let the whole $z$ follow an ODE? There are no explanations and clarifications for these in the paper.\\n\\n2- There is no mention of how $z^\\\\prime$ should be learned. In general, there is no mention on how to train the NDPs. It is unclear in the paper what loss function should be optimized and how the latents should be learned. If it is by variational methods, how the posteriors of $z^\\\\prime$ and $l$ should be learned? I believe the authors should augment on these in the paper, otherwise it is very hard to know how the NDPs should be trained. \\n\\n3- What is the dimension of $l$ used for rotating MNIST experiments? Why NDP is able to extrapolate well when there is variable angular velocity and angular shift (Fig. 5) and fails to extrapolate when there is constant angular velocity (Fig. 4)? It seems the second is an easier task and I wonder why NDP has a poor performance? Does it imply that NDP can only work well in a specific conditions?\", \"4__typo\": \"page 3: the the decoder --> the decoder\\n\\n########## Edit ##########\\n\\nThe authors have addressed all my questions. Thanks.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
aLtty4sUo0o | Quickest change detection for multi-task problems under unknown parameters | [
"Firas Jarboui",
"Vianney Perchet"
We consider the quickest change detection problem where the parameters of both the pre- and post-change distributions are unknown, which prevents the use of classical simple hypothesis testing. Without additional assumptions, optimal solutions are not tractable as they rely on a minimax and robust variant of the objective. As a consequence, change points might be detected too late for practical applications (in economics, health care or maintenance, for instance).
Other approaches solve a relaxed version of the problem through the use of particular probability distributions or of domain knowledge.
We tackle this problem in the more complex Markovian case and provide a new scalable approximate algorithm with near-optimal performance that runs in $\mathcal{O}(1)$. | [
"Quickest Change detection",
"Parametric approach",
"Multi-task"
] | Reject | https://openreview.net/pdf?id=aLtty4sUo0o | https://openreview.net/forum?id=aLtty4sUo0o | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"xlztmPw1a0Y",
"ahuNr9lMl8k",
"1Bj-N3QdfI8",
"uuvfn4Hwyiu",
"r0ZVnmAq7mu",
"Sj_UkoO-SRM",
"2lQdfFNlxfJ",
"2R6ORWKTMC",
"q4BTjZuT-3l",
"M-3FimTEc6",
"i0U03TKMV3"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040491672,
1605637592377,
1605471611075,
1605364023552,
1605363876921,
1605363407116,
1605363037858,
1604462107592,
1603984304353,
1603890691438,
1603421462426
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3578/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3578/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3578/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3578/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3578/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3578/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3578/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3578/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3578/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3578/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper treats a relevant and challenging problem in sequential learning scenarios -- how to detect distributional change over time when the pre- and post-change distributions are not known up to certainty. All reviewers more or less acknowledge that the paper presents a new approach towards solving this inference problem, where the high level idea is to approximately learn the pre- and post-change distribution parameters online using gradient descent and then apply well-known tests for change detection (e.g., the Shiryaev or CUSUM rules) with these assumed to be the pre- and post-change parameters.\\n\\nHowever, beyond the concerns expressed by the reviewers, my finding after going through the manuscript myself is that the presentation of the paper's results leaves a lot to be desired in terms of clarity of exposition, comprehensiveness of performance benchmarking and comparison to existing approaches. Despite some of the reviwers' scores being revised upwards, the overall evaluation of the paper according to me is not adequate to merit acceptance, as per the concerns listed below. \\n\\n1. There are two settings assumed in the paper (beginning of Sec. 2): (a) a completely Bayesian one, with the pre- and post-change distributional parameters drawn from a prior \\\\cal{F} and the change time lambda drawn from a prior pi, and (b) a minmax one, where everything is the same as in (a) except that there is no prior over the change time lambda. However, it is not at all clear, in the algorithm design of the paper, where the prior \\\\cal{F} over the distributions is used in computing (or approximating) conditional probabilities such as P[lambda | v_alpha = n].\\n\\n2. There seem to be meaningless (or ill-defined) expressions in the paper's crucial portion motivating the algorithm design, such as P(X_t ~ f_{theta_0} | v_alpha = n), P(X_t ~ f_{theta_1} | v_alpha = n). It is hard to understand what the event \\\"X_t ~ f_{theta_0}\\\" even means -- I find it impossible to relate it to a sample path property. This leads me to question the validity of the technical development in the paper.\\n\\n3. Another undefined term is \\\"r-quickly\\\" in eq. (4); I had to dig through the classical work of Lai, and Tartakovsky-Veeravalli to get a formal definition for this term. This is not to be expected of a paper that attempts to develop a new change point detection procedure from scratch, especially to an audience (ICLR) that may largely be unfamiliar with classicalt change detection theory. \\n\\n4. There are several technical statements made without adequate formal proof, e.g., \\\"Given the optimal stopping time \\\\nu, it's possible to evaluate the posterior distribution of the change point P(lambda=t | v_alpha=n), which in turn is a good classifier of the pre and post change observation\\\". What the precise meaning of the term \\\"classifier\\\" is, what its \\\"goodness\\\" is, and how exactly it is related to the posterior distribution of lambda given the value of v_alpha, is formally not spelt out for a paper that largely uses formal probability language to develop its main results.\\n\\n5. While I understand that the final algorithm to detect the change involves several approximations and heuristics along the way, which may very well be intuitively appealing, I do not understand (even after repeated passes over the submission) several key aspects -- a concern also expressed by Reviewer 3. 
Why is it reasonable to assume that the conditional distribution of the change time lambda given the algorithm's stop time v_alpha would be logistic, and with the specific parameters mu and s given in the section \\\"Distribution Approximation\\\"? Moreover, it is hard to discern from the crucial Section 3.2 why the functions f_0^n, f_1^n should be useful in practice as proxies to the actual expected log likelihood ratios under the true parameters -- despite Lemma 2 showing that they converge to the true expectations (again, the sense in which this convergence occurs is omitted, leading to imprecision in the statement), the rates as a function of n, t_2 may be slow. I agree in this regard with the same concern voiced by Reviewer 1, and do not see a satisfactory explanation of it in the paper's discussion.\\n\\n6. Comparison to literature. Contrary to the general picture painted in the paper about the lack of sufficient investigation of the \\\"unknown pre and post change parameter\\\" case, there does seem to be a rigorous body of existing work in this line that is not discussed in the manuscript. For instance, \\\"SEQUENTIAL CHANGE-POINT DETECTION WHEN THE PRE- AND POST-CHANGE PARAMETERS ARE UNKNOWN\\\", Lai and Xing, 2009, and \\\"A BAYESIAN APPROACH TO SEQUENTIAL SURVEILLANCE IN EXPONENTIAL FAMILIES\\\", Lai-Liu-Xing, 2009, are both works that address this very setting, and in a comprehensive manner with theoretical guarantees. What the current manuscript does, in the context of both these works, is highly unclear. Whether it is trying to suggest an approximate way of computing the natural posterior distribution of the change time lambda given all data up to now, using the proxy P(lambda | v_alpha = n), or is using a completely different approach altogether, is not adequately discussed at all, which makes the motivating arguments for the algorithm vague.\\n\\n7. Finally, but in no lesser measure, the Experimental Results section features a rather narrow set of (two) scenarios for which it presents numerics. For a paper that claims to demonstrate \\\"experimental results (over a wide variety of settings)\\\" [from the author response], this is quite telling, as it renders the argument in favor of the paper's approach quite weak. Here again, for the first (synthetic) setting, I do not understand the relevance of the neural network adopted to fix the parameters of a Gaussian distribution. Moreover, the reported distributions of the \\\"regretted detection delay\\\" seem to be quite wide for all the approaches compared (unknown params, adaptive, GLR), precluding a reasonable comparison of their performance. The author(s) would do well to expand the scope of both synthetic and non-synthetic experiments to show the validity of their approach, and in each case carry out many more independent trials than just 500 for more accurate benchmarks.\\n\\nI do note that more experimental results have been reported in the appendix, but I would presume that they have more value being in the main body after the algorithm design is explained in a more succinct and clearer manner. This can only come about through a significant rewriting and reorganizing of the paper, which I am confident the author(s) can carry out in order to make this into a much stronger submission. I wish the author(s) good luck with this, and hope to see the strengths of this new approach brought out in a more impactful manner in the next revision.\"}",
"{\"title\": \"We thank the reviewer for this support\", \"comment\": \"We agree that discussing alternative divergence measures that are coherent with the QCD objective are indeed useful for the readers. We introduced a new subsection in the appendix (C-3 in the latest version), where we modify the loss function in order to optimise $E_k[g(\\\\frac{f_{\\\\theta_1}}{f_{\\\\theta_0}})]$ which is, under mild assumptions, similar to optimising an f-divergence measure where $f(x)=x.g(x)$.\", \"we_also_included_empirical_evaluation_of_the_add_using_different_f_divergences\": [\"$g(x) = \\\\log(x)$ (the KL divergence)\", \"$g(x) = \\\\sqrt{x}-1$\", \"$g(x) = (x-1)*\\\\log(x)$\", \"We provided the adaptive and the GLR average detection delay as a baseline.\", \"The experiments uncover yet again the great performances of our algorithm.\"]}",
"{\"title\": \"Happy with the Author(s) response\", \"comment\": \"Thanks very much for all the clarifications especially the difference in contribution in comparison to the Tartakovsky and Veeravalli (2005) paper. I still would appreciate a bit of discussion with there divergence measures. I understand the page limit constraint and also the fact that KL divergence appears naturally in the asymptotic results however some remark on divergence measures would be useful for readers!\\n\\nI changed my rating to \\\"Accept\\\".\"}",
"{\"title\": \"We thank the reviewer for his appreciation and thank them for their suggestion.\", \"comment\": \"The proposed experiment is quite interesting as it benchmarks our algorithm's performances in the usual, simpler framework (where pre-change parameter are known). We even found out the following results:\\n- TWR (without prior knowledge of the pre-change distribution) still beats the adaptive algorithm (with prior knowledge of the pre-change distribution) \\n- TWR (with prior knowledge of the pre-change distribution) has a small constant delay with respect to the GLR approach! \\n \\nWe will include this in the next update for the paper\\n \\n \\nConsidering the second minor remark, there is indeed a typo in the expression of $S_n^{\\\\theta_0,\\\\theta_1}$ that should read:\\n \\\\begin{equation*}\\n S_n^{\\\\theta_0,\\\\theta_1} = \\\\frac{1}{(1-\\\\rho)^n} \\\\sum_{k=1}^n (1-\\\\rho)^{\\\\textbf{k-1}} \\\\prod_{t=k}^n L_t(\\\\theta_0,\\\\theta_1)\\n \\\\end{equation*}\\nThanks for spotting it !\"}",
"{\"title\": \"We understand the reviewer's concern, and hope the following answers provide a satisfying response.\", \"comment\": \"**Heavy dependence on Tartakovsky and Veeravalli (2005) results with known parameters. The contribution in the current paper seems a bit incremental.**\\n \\nWhat's fundamentally different from previous contributions in the field is that we are evaluating the Shiryaev stopping time rather actually detecting the change point. To this extent, [Tartakovsky and Veeravalli (2005)] is irrelevant.\\n\\nIt's true that the asymptotic behaviour of the Shirayev algorithm is a corner stone of our approach (as we segment the data accordingly), but we do not believe that this reduces the value of our contribution.\\n\\nIn addition, simply using the asymptotic behaviour does not solve the QCD problem (cf Lemma 3) and we need to add safety brakes (annealing and penalisation) to have a fully working, state-of-the-art approximate algorithm. \\nWe also provide experimental results as well as theoretical results that ground our reasoning. \\n \\n**The algorithm 1 looks promising however some of the hyper parameter choices such as c, $\\\\epsilon$, $B_\\\\alpha$ and $N_e$ are not clear.**\\n \\nWe discuss the hyper-parameter selection in the Appendix E-1 and C-2 in order to respect the page count limit imposed by ICLR. Roughly speaking, they can be of three different types. \\n \\n- optimisations parameters (number of epochs $N_e$, gradient step, ...)\\n- problem dependent parameters ($\\\\epsilon$ and $c$) (their choice depends on the KL divergence of the pre and post change distributions as well as the mixing times of the associated Markov chains)\\n- practitioner dependent parameters ($B_\\\\alpha$) (describes how much we tolerate false alarms) \\n \\n**For a practitioner which one is better to choose- Shirayev test-statistic or Cusum?**\\n \\nIt depends on how pessimistic the practitioner is. \\nIf you are pessimistic, you should optimise the \\\\textbf{WADD} and thus use the CuSum statistic. If you are optimistic you should optimise the \\\\textbf{ADD} and thus use the Shirayev statistic. \\n \\nPersonally, we prefer the CuSum statistic as it is slightly more stable.\\n\\n**It was not quite clear to me why does one need both annealing and penalisation? I thought adjusting underestimation/overestimation will be sufficient as behavior of pre-changepoint will complement the behavior post change-point.**\\n\\nThe stopping time defined in Equation (9) suffers from two problems. \\n 1) using only pre change parameters before the true change point $\\\\lambda$ leads to over-estimating the statistic \\n 2) using corrupted data after the Shiryaev change point %under known parameters $\\\\nu_\\\\alpha^S$ \\n \\nBasically, penalising the likelihood ratio when $\\\\theta_t^0$ and $\\\\theta_t^1$ are similar solves the first issue, while the annealing solve the second one by taking into account the delay of detecting the Shiryaev change point under known parameters $\\\\mathbb{E}[|\\\\nu - \\\\nu_\\\\alpha^S|]$.\\n \\nAs a consequence, the two principles need to be implemented together to yield the best possible performance. We provide in Appendix C an impact and ablation analysis of both procedures. 
\\n\\n**Do other choices of entropy-based loss functions matter, instead of the KL divergence, or is the KL divergence the most natural choice here?**\\n\\nIt is possible to consider other forms for the loss function; however, as the KL divergence appears directly in the expression of the asymptotic delay under known parameters, we took it as the most natural choice here.\"}",
"{\"title\": \"We thank the reviewer for his feed-back, and hope the following answers satisfy their concern.\", \"comment\": \"**The proposed algorithms solve an optimisation problem (depending on the setting) for minimising the delay in change point detection. As no deep learning models (or even a variant) is used to solve the change point detection problem considered in the paper, this paper seems to be outside of the ICLR scope.**\", \"we_respectfully_disagree_with_the_reviewer_regarding_this\": \"the call for paper welcomes \\\"submissions from all areas of machine learning and deep learning\\\" (and even more precisely, solving the scalability issues of QCD problems falls in the \\\"implementation issues\\\" category).\\n \\nChange point problems have also been deemed, in the past, as a natural fit for ICLR:\\n\\n - Pyramid Recurrent Neural Networks for Multi-Scale Change-Point Detection [ICLR 19']\\n - Kernel Change-point Detection with Auxiliary Deep Generative Models [ICLR 19']\\n - Bayesian Time Series Forecasting with Change Point and Anomaly Detection [ICLR 18']\\n \\nWe hope that this convinces you that our work is relevant to the ICLR community and hope you would revise your score according as this relevancy issue seemed to be your main concern.\\n \\n**I find it very difficult to understand the paper in even 2-3 read. The authors need to improve overall writing quality so that it becomes easier to read and understand.**\\n\\nAs the other reviewers didn't point out difficulties reading the paper (some complemented its quality actually), please point out unclear parts of the paper so that we can revisit them for a better readability. \\n\\n**Some notations are not defined upfront**\\n \\nWe define formally $S_n^{\\\\theta_0,\\\\theta_1}$ and $B_\\\\alpha$ in the Appendix A-1. However given the page limit in the main document we couldn't fit them upfront and we restricted ourselves with some insight about the nature of these quantities. We will point the reader in the next version to appendix A-1 for further details.\"}",
"{\"title\": \"We understand the reviewer's concern, and hope the following answers provide a satisfying response.\", \"comment\": \"**The paper title is for multi-rask problems. However, it seems to me that the proposed algorithm is very general for change detection problem. Except the one subsection in the experiments, I didn't see much connection to multi-task problems.**\\n\\nWe agree with the reviewer that our work is very general and can have a multitude of other possible applications. However, the QCD literature usually assume a nominal pre-change behaviour. Multi-task problems are a setting in which this assumption does not hold up.\\n \\nA crucial feature for multi-task RL is obviously the ability to detect change points in a tractable way. This is what brought us to this problem. Many other motivating examples are introduced in Appendix.\\n \\nHowever if you still find the title misleading, we propose to remove the 'multi-task' part from the title. \\n \\n**The theoretical results are not very strong. There is no Theorem one can claim for the performance of the proposed algorithm. As the algorithm is an approximation to some optimal approach, one may provide a result in the form of competitive ratio or convergence rate. However, Lemma 3 is only some asymptotic behavior of the loglikelihood.**\\n \\nWe agree that our results do not provide strong non-asymptotic convergence guarantees. However, our theoretical results (Lemma 1-3) justify that our algorithm efficiently approximates $L_t^*$. We believe that this in itself is an asymptotic performance guarantee: \\n \\nIn fact, evaluating the performance of our algorithms can boil down to evaluating the detection delay of the Shiayev stopping time $ \\\\mathbb{E} [| \\\\nu - \\\\nu_\\\\alpha^S |] $. In our setting, this quantity is arguably proportional to the estimation error $\\\\mathbb{E}[|S_n^{\\\\theta^0_t, \\\\theta^1_t} - S_n^{\\\\theta_0^*, \\\\theta_1^*}|]$. which in turn is proportional to $\\\\mathbb{E}[|L_t - L_t^*|]$. \\n \\nHowever deriving explicitly this relationship is not only cumbersome (integrating over all possible stopping time scenarios), but also require additional assumptions over the convergence rate of Equation 5.\\n \\nFor these reasons, we chose to defend our algorithm with theoretically grounded insights and experimental results (over a wide variety of settings) rather than additional assumptions over this speed of convergence. \\n\\n**How should one choose the hyper-parameters like c and epsilon? Are the results in section 5 tuned by grid search and presented the best one?**\\n \\nWe provide in Appendix E-1 some insight over the best practice to use when selecting these parameters. In a nutshell they depend on how much variability is expected in your data after the change (True KL divergence between pre- and post- change distributions) and on the stationarity of your distributions (Mixing time). \\n \\nThe hyper-parameters in the presented results have been indeed tuned by grid-search.\"}",
"{\"title\": \"Interesting idea, not very strong results\", \"review\": \"This paper studies the change point detection problem. The classical studies in change detection problems are based on the known prior and posterior parameters, i.e., knowing the distribution (parameters) before and after the change points. Recently, people are extending the results to the case where the prior parameter is known and the posterior parameter is unknown (anomaly detection) or with some sampling cost constraints (data-efficient change detection). However, this work proposes an algorithm that generalizes the CUSUM approach to the case where the parameters are unknown. The idea is very interesting and I believe the impact of the algorithm could be of significance given its potential in real-world applications. Besides, I have the following comments.\\n\\n1) The paper title is for multi-rask problems. However, it seems to me that the proposed algorithm is very general for change detection problem. Except the one subsection in the experiments, I didn't see much connection to multi-task problems.\\n\\n2) The theoretical results are not very strong. There is no Theorem one can claim for the performance of the proposed algorithm. As the algorithm is an approximation to some optimal approach, one may provide a result in the form of competitive ratio or convergence rate. However, Lemma 3 is only some asymptotic behavior of the loglikelihood.\\n\\n3) How should one choose the hyperparameters like c and epsilon? Are the results in section 5 tuned by grid search and presented the best one?\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Paper is marginally below acceptance threshold\", \"review\": \"***** Paper's Summary *****\\n\\nThis paper considers the quickest change detection (QCD) problem where pre-change and post-change distributions are unknown. For such problems, the authors proposed approximate algorithms in MIN-MAX and Bayesian settings. The algorithms run in O(1) and have near-optimal performances. The performance of proposed algorithms is verified using synthetic data and a reinforcement learning environment.\\n\\n\\n***** Paper's Strengths *****\\n\\nThe proposed algorithms are the approximate methods that have a near-optimal performance for QCD problems with unknown pre-change and post-change distributions. \\n\\nThe proposed algorithms are scalable and having low detection delays. Further, these algorithms work for a more general class of problems as they do not require restrictive conditions like IID samples, specific distributions, etc. on the problems.\\n\\nThe performance of proposed algorithms is better than existing algorithms. \\n\\n\\n***** Paper's Weaknesses ***** \\n\\nThe proposed algorithms solve an optimization problem (depending on the setting) for minimizing the delay in change point detection. As no deep learning models (or even a variant) is used to solve the change point detection problem considered in the paper, this paper seems to be outside of the ICLR scope.\\n\\nI find it very difficult to understand the paper in even 2-3 read. The authors need to improve overall writing quality so that it becomes easier to read and understand. \\n\\n\\n***** Comments ***** \\n\\nSome notations are not defined upfront e.g., Line 2 on Page 3: $S_n^{\\\\theta_0, \\\\theta_1}$ and $B_\\\\alpha$. \\n\\n\\n***** Questions for the Authors *****\\n\\nPlease comments on how your paper fits the ICLR scope. \\n\\n\\n***** Post Rebuttal *****\\n\\nThank you for your clarifications! After reading the rebuttal and comments of other reviewers, I am increasing my score.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper\"}",
"{\"title\": \"A relevant topic but the paper falls short of acceptance level\", \"review\": \"The author(s) propose a quickest change detection technique under known parameter scenario. They use a Markovian dynamics to generate the pre and post change-point distributions and use a Shirayev test-statistic based on the asymptotic behavior of the optimal delay under know parameters. The proposed methodology is validated on synthetic data and a multitask reinforcement learning example. There are several issues which restricts the paper to reach an optimal level. These are highlighted below\\n\\n-Heavy dependence on Tartakovsky and Veeravalli (2005) results with known parameters. The contribution in the current paper seems a bit incremental.\\n\\n-The algorithm 1 looks promising however some of the hyper parameter choices such as c, $B_\\\\alpha$, $\\\\epsilon$ and $N_e$ are not clear.\\n\\n-For a practitioner which one is better to choose- Shirayev test-statistic or Cusum?\\n\\n-It was not quite clear to me why does one need both annealing and penalisation? I thought adjusting underestimation/overestimation of ${L_t}^*$ will be sufficient as behavior of pre-changepoint will complement the behavior post change-point.\\n\\n-Does other choice of entropy based loss functions matter instead of KL divergence or KL divergence is the most natural choice here?\\n\\n-\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Recommendation to Accept\", \"review\": \"This paper studies the quickest change detection for Markovian data, when both the parameters of pre- and post-change distributions are unknown. The main contribution is a scalable algorithm that sequentially estimates the unknown parameters and plug-in to classical detection schemes to get the stopping rule. A notable feature is that this is a joint estimation and detection framework. And the authors incorporate several tools, like SGD, annealing, penalization, into the detection task, which turns out to have good performance compared with existing benchmarks.\\n\\nOverall, this paper is clearly-written and well-organized, and the numerical examples support the claims made in the paper.\", \"minor_comments\": \"1. Usually in classical change-point detection literature, people assume the pre-change distribution is known since it can be estimated from historical (nominal) data, and the framework proposed in this paper can obviously be applied in such a setting as well. Therefore, I think it might be interesting to add one comparison in such setting (i.e., only post-change parameters is unknown and need to estimated). In such a case, the GLR and adaptive methods do not need to learn theta_0 offline and we can have a fair comparison of the performance of learning post-change parameters and also the detection delay. \\n\\n2. In Appendix A.1, the introduction to SHIRYAEV Algorithm, it seems that there is a missing \\\\rho in the denominator of the statistics S. The reason is that only under this \\\\rho-scaled version of likelihood-ratio can the recursion in A.1 holds. \\n\\n--------- After rebuttal ---------\\nThanks to the authors for the response and updated paper. I keep my original score and recommend acceptance for this paper.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
JdCUjf9xvlc | Fourier Representations for Black-Box Optimization over Categorical Variables | [
"Hamid Dadkhahi",
"Jesus Rios",
"Karthikeyan Shanmugam",
"Payel Das"
] | Optimization of real-world black-box functions defined over purely categorical variables is an active area of research. In particular, optimization and design of biological sequences with specific functional or structural properties have a profound impact in medicine, materials science, and biotechnology. Standalone acquisition methods, such as simulated annealing (SA) and Monte Carlo tree search (MCTS), are typically used for such optimization problems. In order to improve the performance and sample efficiency of such acquisition methods, we propose to use existing acquisition methods in conjunction with a surrogate model for the black-box evaluations over purely categorical variables. To this end, we present two different representations, a group-theoretic Fourier expansion and an abridged one-hot encoded Boolean Fourier expansion. To learn such models, characters of each representation are considered as experts and their respective coefficients are updated via an exponential weight update rule each time the black box is evaluated. Numerical experiments over synthetic benchmarks as well as real-world RNA sequence optimization and design problems demonstrate the representational power of the proposed methods, which achieve competitive or superior performance compared to state-of-the-art counterparts, while improving the computational cost and/or sample efficiency substantially. | [
"optimization",
"categorical variables",
"sample efficiency",
"acquisition methods",
"fourier representations",
"representations",
"categorical variables optimization",
"functions",
"active area",
"research"
] | Reject | https://openreview.net/pdf?id=JdCUjf9xvlc | https://openreview.net/forum?id=JdCUjf9xvlc | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"lA_AciV6kC",
"Kp_ISY2764h",
"X5G1KtYlCX",
"QKXO-GvrY3L",
"51onIB2AFyU",
"noGMzQsyFJ9",
"f5F59yaRs0",
"aKPyautVyYX",
"JQ1CeqY5Vw",
"AuqT4pEgdWr",
"Um-hbeKlCXq"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040383988,
1606264405702,
1605795042542,
1605794996907,
1605794572642,
1605794329877,
1605793617791,
1603963502451,
1603911437654,
1603860190297,
1603526696528
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3577/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3577/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3577/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3577/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper considers the problem of black-box optimization over categorical variables using expensive function evaluations.\\n- Fourier representation is proposed as surrogate model by treating the categorical input as the direct sum of cyclic groups. The parameters are learned using exponentially-weighted update algorithm.\\n- To select the inputs for evaluation, simulated annealing and MCTS are employed as search algorithms to optimize the learned surrogate function. \\n- Experiments are performed on two synthetic problems and RNA sequence design problems.\\n\\nThe proposed fourier representation is novel and the results show the promise of this method in terms of computational-efficiency over state-of-the-art COMBO method.\\n\\nThere are two unsatisfactory aspects of this paper.\\n1. In expensive black-box optimization problems, number of function evaluations to find better solutions is critical. This paper takes a non-Bayesian approach to improve computational-efficiency (over prior Bayesian optimization methods), but this same advantage comes at the expense of sample-efficiency (number of function evaluations) due to lack of exploration. \\n2. In fourier representation, mapping categorical values to different group elements may change which basis are used for modelling. From a practitioner's perspective, it is important to verify that the performance is not significantly affected by this choice. This can be verified empirically. Even though one reviewer raised this point, authors' haven't responded though it is an easy experiment to do.\\n\\nDue to the above shortcomings, the paper is judged to be not ready for publication at the current stage. I strongly encourage to resubmit the paper after addressing the above two concerns.\"}",
"{\"title\": \"Response Overview\", \"comment\": \"We would like to thank all the reviewers for evaluating our submission. We responded in detail to all the comments. In summary, we made the following changes to the manuscript:\\n\\n1. We added experiments on the impact of the surrogate model order (Appendix G).\\n2. We added experiments on the choice of the acquisition method, i.e. MCTS versus SA (Appendix F).\\n3. We added a detailed quantification of the number of terms (experts) in each representation (Appendix B).\\n4. We provided proof that the proposed group-theoretic Fourier representation is unique and complete (Appendix A).\\n5. We expanded our related work section to include important additional references on surrogate models and acquisition methods.\\n6. We provided insights on uncertainty quantification in our algorithm as compared to TS and UCB. A summary of this is added to the Future Work Section and is left for further research.\\n7. We offer an algorithm for pruning experts, which is particularly relevant in problems over higher order models and/or larger numbers of variables (Provided as an additional supplement; can be added to the appendix if necessary).\\n\\nWe would be happy to make further additions/corrections if necessary. In conclusion, we believe that our paper and our proposed representations are not only of interest in both combinatorial black-box optimization and biological sequence optimization/design, but also would find applications in other problems involving functions over categorical variables.\"}",
"{\"title\": \"Response to Reviewer 3 --- Continued\", \"comment\": \"> In each experiment, which m (max model order) is used? ...\\n\\nWe have used $m = 2$ in all the experiments (as mentioned in Section 4). We added a new section in the appendix (Appendix G) to compare the performance of order 2 and order 3 models in the RNA optimization problem. In summary, given the finite evaluation budget of $500$, we only observed minor changes in the performance of the algorithm when increasing the model order to 3. A similar observation was made in other problems. \\n\\nIn the added section (Appendix G), we also included the computation times for both order 2 and order 3 models. As discussed in Section 3, the computational complexity of ECO is linear in the number of experts, which in turn grows exponentially with order $m$. \\n\\nWe further add that we have already developed pruning strategies to select a subset of experts automatically (particularly relevant for problems over higher order models and/or larger numbers of variables), which we did not include in this paper due to space limitations. We added this as an additional supplement (can be added to the appendix if necessary).\\n\\n> Up to the experiment section, the paper is presented in a way that the ultimate goal is to find an optimum as few evaluations as possible ...\\n\\nIn all the experiments, the performance of ECO-F/G have been compared against the baselines as well as state-of-the-art methods in terms of sample efficiency. All the plots shown in the manuscript/appendix depict the minimum function value (y-axis) found versus the number of black-box evaluations (x-axis). The computational advantage of ECO-F/G with respect to COMBO (and Bayesian optimization methods in general) comes as an added bonus, which makes it computationally tractable for problems over larger numbers of variables. \\n\\n> Maybe on Eterna-100 dataset, COMBO is not applicable?\\n\\nWe compared the performance of our algorithms in design problems against a state-of-the-art algorithm specifically developed for that problem, i.e. LEARNA for the RNA design problem. While COMBO, in theory, can be used for the design problem, after some modifications, we are not aware of any such effort in the literature. The major obstacle in utilizing Bayesian optimization methods in general, and COMBO in particular, for design problems is their high computational cost, which makes them impractical for problems over even moderate sequence lengths (let alone larger sequence lengths), which are typically of interest in design problems.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"> SA and MCTS are acquisition function optimizers ...\\n\\nThat is correct. In the draft, we used a different definition for the acquisition function, by which we mean the module that selects a candidate point for black-box evaluation, given the current estimate for the surrogate model. As pointed out by the reviewer, our surrogate model is an estimate (at any given time t) rather than a predictive distribution. Please also see our reply to Reviewer 2 for a detailed formalization of various surrogate-acquisition frameworks.\\n\\n> Since, typically, not much information about an objective is available in advance, choosing a different acquisition function optimizer is not practical ...\\n\\nThe rationale behind the designation of SA and MCTS as acquisition methods for generic BBO and design problems, respectively, is as follows. In the literature, SA is typically used as a baseline and method of choice for the generic BBO problem, whereas MCTS has been commonly used for the design problem. For instance, SA has been considered as a baseline and/or acquisition method in COMEX, BOCS, and COMBO (albeit with a different algorithm than ours). On the other hand, MCTS (i.e. RNA-MCTS as well as its variants) is perhaps the most popular RNA design technique in the literature. \\n\\nHaving said that, we agree that both SA and MCTS can be used for both generic BBO and design problems. We added a new section on the choice of the acquisition methods in the appendix (Appendix F), comparing the performance of ECO-F/G when different acquisition methods (SA or MCTS) are used. In summary, for the RNA optimization problem (generic BBO), the SA variants slightly outperform the MCTS variants, although this performance gap seems to be decreasing over time. For the design problem, on the contrary, the MCTS variants surpass the SA variants. In the latter case, the results are more varied; in some cases, the performance gap is almost non-existent, whereas in others we observe a slight performance gap.\\n\\nFinally, we add that the representational power of the proposed surrogate models is demonstrated via the significant improvements obtained in the performance of ECO-F/G in conjunction with SA and MCTS versus those of vanilla SA and MCTS, respectively.\\nRegardless of the representational power of the surrogate model, the search algorithm does impact the overall performance of any black-box optimization framework. The strong representational power of the surrogate model does not necessarily lead to identical performances when different search methods are used.\\n\\n> In regard to weakness 2, the correspondence between categorical values of a categorical variable and the elements of a group seems arbitrary ... \\n\\nLet $\\\\chi = [k]^n$ be the categorical domain. Let the true function be $f$. For generality, let us consider a complex valued function $f: \\\\chi \\\\rightarrow \\\\mathbb{C}$ where $\\\\mathbb{C}$ is the field of complex numbers. The basis functions are $\\\\psi_{{\\\\cal I}} (x)$. Now, one can view a function as a $[k]^n$-length vector, one entry each for evaluating the function at every point in the domain $\\\\chi$. We denote the vector for function $f$, thus obtained, by $f^{\\\\chi} \\\\in \\\\mathbb{C}^{k^{n}}$. Similarly, denote the vector for evaluations of the basis function $\\\\psi_{{\\\\cal I}}$ by $\\\\psi_{{\\\\cal I}}^{\\\\chi} \\\\in \\\\mathbb{C}^{k^{n}}$. 
Let $A$ be a matrix created by stacking all vectors corresponding to basis vectors in the columns. Then, the Fourier representation can be written as $f^{\\chi} = A \\alpha$ where $\\alpha$ is the vector of Fourier coefficients in our group-theoretic representation. Now, due to the use of complex exponentials, one can show that $\\sum_{x \\in [k]^n} \\psi_{{\\cal I}} (x) \\overline{\\psi_{{\\cal I}'} (x)} = 0$ if ${\\cal I} \\neq {\\cal I}'$. Therefore, the columns of the matrix $A$ are orthogonal. Hence, $A$ is a full rank matrix. Therefore, our representation merely expresses a vector in another full rank orthogonal basis. Hence, it is unique and complete. This is added to the appendix as a remark (Appendix A).\\n\\nIn the example of the group of integers modulo 6, although the numbers 2 and 4 are inverses of one another in the cyclic group defined over a given variable $x_i$ (assuming that $k = 6$), this does not impose any restriction on the values of the function at 2 and 4 for variable $x_i$. In other words, there is no connection between the values of $f$ at $x_i = 2$ and $x_i = 4$ given the values for the remaining variables $x_{-i}$. To be precise, the dimensionality of the representation (or the degree of freedom) is $k^n$, which exactly matches the function dimensionality.\"}",
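As a sanity check on the full-rank argument above, here is a small numerical verification of the character orthogonality for a direct sum of cyclic groups. This is a sketch under the assumption that the characters take the standard form $\psi_{\mathcal I}(x) = \exp(2\pi i \langle \mathcal I, x\rangle / k)$; the variable names are illustrative:

```python
import itertools
import numpy as np

k, n = 3, 2  # small example: the domain is Z_3 x Z_3
points = list(itertools.product(range(k), repeat=n))   # all x in [k]^n
indices = list(itertools.product(range(k), repeat=n))  # all character indices I

# A[x, I] = psi_I(x) = exp(2*pi*i * <I, x> / k)
A = np.array([[np.exp(2j * np.pi * np.dot(I, x) / k) for I in indices]
              for x in points])

# Columns are orthogonal under the Hermitian inner product: A^H A = k^n I,
# so A has full rank and the representation is unique and complete.
print(np.allclose(A.conj().T @ A, (k ** n) * np.eye(k ** n)))  # True
```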
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"> Regarding the two proposed representations, isn't the first one (Equation (2)) a special case of the second (Equation (6))? What are the trade-offs in choosing one among them?\\n\\nThe two representations are identical only at $k = 2$ (i.e. the Boolean case). In general, since the terms (experts) in the first representation are monomials, whereas the terms in the second representation are sines and cosines (or complex exponentials), the two representations are completely different.\\n\\nAs we can see from experiments, in some experiments the first representation outperforms the second one, whereas in others we have the opposite scenario. The answer to the question of which representation is better for a given problem is not trivial, and depends on the properties of the black-box function at hand. As a result of this, one interesting direction for future research would be to devise an ensemble model, which would maintain both representations, and would ideally perform at least as well as the best one (or even better than both); this is also pointed out in the Future Work Section.\\n\\n> There are many important and relevant related work (both for surrogate modeling and acquisition function optimization) that are not discussed in the paper ...\\n\\nWe thank the reviewer for pointing out the missing citations. We added the following descriptions to the Related Work section in the draft.\\n\\n- [1] suggests a surrogate model based on random forests to address optimization problems over categorical variables. The proposed SMAC algorithm uses a randomized local search under the expected improvement acquisition criterion in order to obtain candidate points for black-box evaluations. [2] suggests a tree-structured Parzen estimator (TPE) for approximating the surrogate model, and maximizes the expected improvement criterion to find candidate points for evaluation.\\nFor optimization problems over Boolean variables, multilinear polynomials [BOCS, COMEX] and Walsh functions [3] have been used in the literature.\\n\\n- As an alternative to parameter free search methods (such as SA), [4] suggests to use a parameterized policy to generate candidates that maximize the acquisition function in Bayesian optimization over discrete search spaces. Our MCTS acquisition method is similar in concept to [4] in the sense that the tabular value functions are constructed and maintained over different time steps. However, we are maintaining value functions rather than a policy network.\\n\\n> The proposed one-hot encoding in Equation (2) is said to have \\\"far less terms than a vanilla encoding\\\". Please provide a quantitative description of this reduction of number of terms.\\n\\nWhen all the terms up to max degree of $n$ are used, the number of terms in vanilla one-hot representation is $2^{kn}$, whereas our representation reduces this number to $k^n$ matching the space dimensionality, thereby making the algorithm computationally tractable and efficient. As an example, in the RNA optimization problem with $k = 4$ and $n = 30$, this leads to a reduction of terms by a whopping factor of $\\\\approx 1.267 \\\\times 10^{30}$. 
As stated in Theorem 1, despite this significant reduction, the resulting representation is in fact unique and complete, and we are not losing any information in the process.\\nWhen a max degree of $m$ is used in the approximate representation, the number of terms (as mentioned in the appendix) in our proposed representation is equal to $d = \\sum_{i=0}^m \\binom{n}{i} (k-1)^i$. This number in a vanilla one-hot encoded representation would be equal to $\\sum_{i=0}^m \\binom{nk}{i}$. We added the full description on the number of terms to the appendix (Appendix B).\\n\\n> It is not entirely clear how the representation for a real-valued function reduces to (8) from (6) ... \\n\\nSince the black-box function is assumed to be real-valued, the imaginary part of the function has to be zero. As a result, we discard the imaginary part of the representation, and only consider the real part.\\n\\n> Is the choice of n=30 for RNA sequence optimization motivated by some real-world implication?\\n\\nFrom a biological standpoint, this is the max length of short RNA sequences that play a number of regulatory roles from plants to animals, implying their involvement in fundamental cellular processes (see D. P. Bartel, \\u201cMicroRNAs: genomics, biogenesis, mechanism, and function,\\u201d Cell, vol. 116, no. 2, pp. 281\\u2013297, 2004.).\\n\\nIn our experiments, we picked the sequence length of $n=30$ since COMBO could be run in about $24$ hours (for comparison purposes). We note that for higher sequence lengths, COMBO becomes computationally impractical, and this is one of the advantages of our proposed ECO algorithm.\"}",
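To make the term counts above concrete, a short sketch that evaluates the two counting formulas stated in this response; the function names are illustrative:

```python
from math import comb

def eco_num_experts(n, k, m):
    # Abridged representation: d = sum_{i=0}^m C(n, i) * (k - 1)^i
    return sum(comb(n, i) * (k - 1) ** i for i in range(m + 1))

def vanilla_num_experts(n, k, m):
    # Vanilla one-hot encoded representation up to degree m
    return sum(comb(n * k, i) for i in range(m + 1))

# RNA optimization setting from the paper: k = 4 categories, n = 30 variables.
for m in (2, 3):
    print(m, eco_num_experts(30, 4, m), vanilla_num_experts(30, 4, m))
```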
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"> On the one hand, assuming a surrogate function that is equipped with this ability to generalize ...\\n\\n(We added a summary of the following insights to the Future Work Section, i.e. Section 5.)\\n\\nTo answer this question let us consider three acquisition strategies a) UCB b) Thompson Sampling and c) Our SA Algorithm.\\n\\nLet $p_t(w)$ be the posterior distribution on the linear coefficients in $\\\\textit{any}$ of our representations with respect to a prior $p_0(w)$. Suppose $p_t(w)$ is obtained using a standard Bayesian inference procedure given past data and the corresponding evaluations. In fact, for the Boolean case, the BOCS algorithm operates in this setting. Let $m_t[x]$ be the mean function due to the posterior (which is a function of the categorical vector $x$). Let $\\\\sigma_t[x]$ be the standard deviation under the posterior $p_t$ at a point $x$. Let $f_w[x]$ be the categorical function when weight $w$ is used in the representation. Let $f^h_t(x)$ be our hedge surrogate at time $t$.\", \"the_three_acquisition_strategies_will_look_as_follows\": \"$\\\\textbf{UCB}$: $\\\\mathrm{argmax}_{x \\\\in [k]^n } m_t[x] + \\\\gamma_t \\\\sigma_t[x]$.\\n\\n$\\\\textbf{TS}$: Sample $w \\\\sim p_t(\\\\cdot)$. Then compute $\\\\mathrm{argmax}_{x \\\\in [k]^n} f_w(x)$.\\n\\n$\\\\textbf{Our SA}$: $\\\\mathrm{argmax}_{x \\\\in [k]^n} f^h_t(x) + \\\\gamma_t n(x)$ where $n(x)$ is sampled i.i.d for every $x$ from a Gumbel Distribution.\\n\\nWe are using the property that Gibbs sampling over discrete domain is equivalent to logistic sampling which can be done using Gumbel softmax. \\n\\nIn $\\\\textit{all}$ the above, the acquisition strategy involves an optimization over the categorical domain of some categorical function which itself is a hard problem. Hence, it is non-trivial to find the argmax in the UCB and TS cases as each leads to another combinatorial optimization problem over a categorical domain. Please note that the argmax in our SA method is approximated via Gibbs sampling. In fact, SA could be used further to approximate it by adding a further uncertainty term! Representation of $f_w[x], m_t[x], \\\\sigma_t[x]$ is a crucial component even if one were to contemplate combinatorial optimization routines for categorical domains.\\n\\nIn our case, we don't use a Bayesian posterior mean function but rather an online approximator learnt using Hedge. Hedge algorithm has $\\\\textit{strong adversarial guarantees}$ (please see the COMEX paper for theoretical results in the Boolean case). In other words, given any additional black box evaluation, it is guaranteed to move closer to the true black-box model in some distance; This carries over to our setting as well. However, there is a $\\\\textit{domain independent}$ exploration bonus due to $n(x)$ being sampled i.i.d from the same distribution regardless of $x$. The terms that account for uncertainty in both TS and UCB depend on $x$.\\n\\nIn conclusion, domain dependent uncertainty incorporation is left for future work. However, it is non-trivial to do so efficiently due to the argmax operation over a categorical domain. Our main contribution is the representations of the surrogate model which is learnt using hedge in an adversarial setting. 
We believe our contribution would form the basis for further work irrespective of the framework adopted (as can be seen above).\\n\\nFinally, we point out that the applicability of the proposed representations is not limited to the surrogate model learning with expert advice (i.e. ECO). For instance, the proposed representations can be used in a BOCS-type algorithm with other acquisition frameworks like UCB and TS. \\n\\n> In terms of experimentation, since this paper introduces a new decomposition that is complete and unique, I would've expected to see some results concerning how well a truncated decomposition fits a known function of categorical variables ...\\n\\nAs suggested by the reviewer, we added a section in the appendix (Appendix G), comparing the performance of order 2 and order 3 models for both ECO-F and ECO-G in the RNA optimization problem. Please also see our responses to Reviewers 1 and 3.\"}",
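A minimal sketch of the SA acquisition rule described in this response, i.e. picking $\mathrm{argmax}_x f^h_t(x) + \gamma_t n(x)$ with i.i.d. Gumbel noise over an enumerated candidate set. The function and variable names are illustrative assumptions; by the Gumbel-max property this is equivalent to sampling from a Gibbs/softmax distribution over the candidates:

```python
import itertools
import numpy as np

def sa_acquisition(surrogate, candidates, gamma_t, rng):
    """Select argmax_x surrogate(x) + gamma_t * n(x), with n(x) ~ i.i.d. Gumbel.

    By the Gumbel-max trick, this draws a candidate from the softmax (Gibbs)
    distribution with logits surrogate(x) / gamma_t, so gamma_t acts as a
    temperature controlling the exploration bonus."""
    scores = np.array([surrogate(x) for x in candidates])
    noise = rng.gumbel(size=len(candidates))
    return candidates[int(np.argmax(scores + gamma_t * noise))]

# Toy usage: 2 categorical variables with k = 4, scored by a dummy surrogate.
rng = np.random.default_rng(0)
cands = list(itertools.product(range(4), repeat=2))
print(sa_acquisition(lambda x: -sum(x), cands, gamma_t=0.5, rng=rng))
```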
"{\"title\": \"Response to Reviewer 1\", \"comment\": \"> The paper proposes two representations, namely one-hot encoded Boolean expansion ...\\n\\nWe propose an abridged version of one-hot encoded Boolean Fourier expansion rather than the vanilla one-hot encoded Boolean Fourier expansion, and this is novel to our work, to the best of our knowledge.\\n\\n> The paper seems not to have clearly abundant novelty ...\\n\\nThe novelty of our work (in part; see Contributions in Section 1 for a complete list) is to propose two representations for modeling functions over purely categorical variables. Such functions are of wide interest in the context of real-world applications, e.g. in the design and optimization of chemical or biological molecules. \\nOf the two representations we propose, the abridged one-hot encoded Fourier representation is novel to this work; Fourier representation on Abelian groups has not been previously used as a surrogate model for functions over purely categorical variables. The number of terms in vanilla one-hot representation (typically used in literature to convert categorical problems to Boolean ones) is $2^{kn}$, whereas our representation reduces this number to $k^n$ matching the space dimensionality, thereby making the algorithm computationally manageable and efficient. This leads to a unique and complete representation -- superior/competitive performance of ECO is shown in experiments with up to $100\\\\times$ speed-up with respect to COMBO.\\n\\nWe believe that the applicability/significance of the proposed representations go far beyond the framework of black-box optimization with expert advice and can be used in other optimization algorithms and contexts. \\nOne avenue is to develop a BOCS-like algorithm for optimization over categorical variables using our representations.\\nAnother avenue is applications in reinforcement learning (RL): given that the elements of action and state spaces can be expressed as vectors of categorical variables, our representations can be used $(i)$ in order to model reward functions in model-based RL and $(ii)$ as linear value function approximators.\\n\\n> The one-hot Boolean expansion could have a certain degree of lack of scalability if the order of approximation is large ...\\n\\nWe agree with the reviewer that utilizing one-hot encoded representations in order to convert Boolean representations to categorical ones is not scalable and would lead to too many terms. In fact, this is exactly the motivation behind our proposed representations. The main idea of our first representation is to overcome such shortcomings in one-hot encoding. We introduce an abridged version of one-hot encoded Boolean Fourier expansion, where the number of terms has been significantly reduced in comparison to the vanilla counterpart (please see our response to Reviewer 4 and a detailed description of the number of experts in Appendix B). Despite this significant reduction, we prove that the resulting representation is in fact unique and complete. Our second representation provides an alternative where we define a group structure among categorical variables to avoid one-hot encoding completely. 
To the best of our knowledge, both representations for modeling black-box functions over categorical variables are novel to this work.\\n\\nWe further add that we have already developed pruning strategies to select a subset of experts automatically (particularly relevant in problems over higher order models and/or larger numbers of variables), which we did not include in this paper due to space limitations. We added this as an additional supplement (can be added to the appendix if necessary).\\n\\n> The one-hot encoded Boolean function and group-theoretical expansion should have applications that they fit well and problems where they are less applicable ...\\n\\nIn general, both proposed representations are exact and complete. As such, both representations would fit any function over purely categorical variables. In our experiments, we have truncated the representations to a max degree of two. In many applications, a low-order model is sufficient to capture the interactions among different variables (and such applications are naturally a better fit to our representations). Although this could potentially lead to approximation errors in some applications where higher-order interactions are present, this typically allows for trading off approximation accuracy for scalability.\\nWe add that, in existing physics-based functions, the majority of energy is concentrated in lower-order terms, rendering low-order approximations justifiable. As a result, in practice, it is highly unlikely/rare to see any benefits at $m \\\\geq 4$. Finally, we point out that similar observations were made in the BOCS and COMEX papers for the Boolean problem.\\n\\nWe added a section in the appendix (Appendix G), where we compare the performance of order 2 and order 3 models for the RNA optimization problem. Please also see our response to Reviewer 3.\"}",
"{\"title\": \"Official review\", \"review\": \"The paper proposes two representations, namely one-hot encoded Boolean expansion and group-theoretical Fourier expansion, for the surrogate model used for the black-box evaluations on purely categorical variables. With the two surrogate models, the authors tackle both the black-box optimization problem and the design problem. Two forms of acquisition functions are applied for query selection \\u2013 simulated annealing and Monte Carlo Tree Search. The new algorithms are compared with the existing methods in simulations and have advantage for the objective value and speed-up.\\n\\nThe paper has many contributions, and I would be inclined to recommend the paper for acceptance, after the rebuttal.\\n\\nMy concerns are three-folds.\\n1. The paper seems not to have clearly abundant novelty, as far as I understand. The author should highlight what is new for one-hot encoded function and group-theoretical Fourier expansion. Also, the simulated annealing and Monte Carlo Tree Search are not fully novel and I seem not to fully see the major changes to these methods. It would be great if the authors could list the major changes to these algorithms to fully show novelty.\\n2. The one-hot Boolean expansion could have a certain degree of lack of scalability if the order of approximation is large. If the order $m$ has a large value, then all $m$-subset of set $[n]$ might have too many terms for the representation to be efficient. A method to prune the $m$-subset should be needed. \\n3. The one-hot encoded Boolean function and group-theoretical expansion should have applications that they fit well and problems that they are less applicable. It seems critical to identify the superb properties of these functions than other alternatives, and a rough range of the problems that work better with these surrogate functions. Without such identification, the surrogate functions have reduced significance.\\n\\nPlease fix the following unimportant typos. The variable $j$ is overloaded in Eq (7), as it is both the complex number unit and the integer pair. The r and i subscript in Eq (8) are not defined, for real and imaginary part of the function $f_{\\\\alpha}(x)$.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The authors offer a Fourier representation for categorical variables that allows for generalization of observations of the response surface across categories.\", \"review\": \"This seems like a very interesting paper and perhaps quite impactful indeed, if it achieves what it claims to. Unfortunately, assessing the novelty and merit of this paper relies heavily on expertise in Group theory, which I certainly lack. Here are some scattered thoughts, for whatever they're worth.\\n\\nOn the one hand, assuming a surrogate function that is equipped with this ability to generalize, leveraging MCTS and the UCT selection criterion as an acquisition function seems reasonable to me. On the other hand, it seems to me that using SA, targeting a tempered surrogate, might be too greedy and not align with the latest approaches in black box optimization, where some measure of uncertainty is used in the acquisition decision-making process. It would be good to have a discussion on how one could obtain an uncertainty quantification from the decomposition (e.g. uncertainties around each coefficient alpha and how that extends to uncertainty in f).\\n\\nIn terms of experimentation, since this paper introduces a new decomposition that is complete and unique, I would've expected to see some results concerning how well a truncated decomposition fits a known function of categorical variables. Some toy experiments would provide valuable evidence that the surrogate learned from such a truncated decomposition is likely good even for relatively short truncations.\\n\\nAs I cannot pass judgement on the novel aspects of this paper, I will be generous with my score and let my confidence score reflect my lack of expertise. Looking forward to reading other reviews and comments on this manuscript.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"1: The reviewer's evaluation is an educated guess\"}",
"{\"title\": \"Official Review #4\", \"review\": [\"Paper Summary\", \"The paper considers the problem of black-box optimization of expensive functions defined over categorical variables. A surrogate model-based optimization approach is proposed to tackle this problem. Fourier representations are proposed as surrogate model by treating the categorical input as the direct sum of cyclic groups Z/kZ (k is the arity/category size). The coefficients of this representation are learned via exponentially-weighted update rule. The selection of each subsequent input for evaluation is performed via direct optimization of the surrogate model built over inputs collected previously. Simulated Annealing and Monte Carlo Tree Search is proposed as the acquisition function optimization procedure for unconstrained and constrained problems respectively. Experiments are performed on two synthetic problems and RNA-sequence optimization.\", \"Detailed Comments\", \"The paper considers an important problem which has multiple applications (for e.g. biological sequence design) in practice.\", \"Although the proposed method is a natural generalization of the COMEX [5] approach which was proposed for binary variables, it shows good performance on two benchmarks and can be useful for end users because of its simplicity.\", \"Regarding the two proposed representations, isn't the first one (Equation (2)) a special case of the second (Equation (6))? What are the tradeoffs in choosing one among them?\", \"There are many important and relevant related work (both for surrogate modeling and acquisition function optimization) that are not discussed in the paper. Please provide a detailed discussion comparing proposed approach with the below methods otherwise it comes across as if there is limited existing work for the considered problem. They are very relevant to the paper because all of them consider the setting of \\\"small\\\" data with expensive function evaluations.\", \"Surrogate modeling\", \"SMAC [1] is the most natural approach that handles categorical variables nicely.\", \"Tree structured Parzen Estimator (TPE) [2] is another approach that can easily handle categorical variables.\", \"Walsh functions [3] have been used effectively for surrogate modeling over discrete variables.\", \"Acquisition function optimization\", \"Amortized Bayesian Optimization over Discrete Spaces [4]\", \"The proposed one-hot encoding in Equation (2) is said to have \\\"far less terms than a vanilla encoding\\\". Please provide a quantitative description of this reduction of number of terms.\", \"It is not entirely clear how the representation for a real-valued function reduces to (8) from (6). Do we just ignore the complex part similar to the common approach in random Fourier features used for kernel methods? Please provide a clear derivation.\", \"Is the choice of n=30 for RNA sequence optimization motivated by some real-world implication?\", \"References\", \"[1] Hutter, F. and Hoos, H. H. and Leyton-Brown, K. Sequential Model-Based Optimization for General Algorithm Configuration In: Proceedings of the conference on Learning and Intelligent OptimizatioN (LION 5)\", \"[2] Bergstra, J. S., Bardenet, R., Bengio, Y., & K\\u00e9gl, B. (2011). Algorithms for hyper-parameter optimization. In Advances in neural information processing systems (pp. 2546-2554).\", \"[3] Lepr\\u00eatre, F., Verel, S., Fonlupt, C., & Marion, V. (2019, July). Walsh functions as surrogate model for pseudo-boolean optimization problems. 
In Proceedings of the Genetic and Evolutionary Computation Conference (pp. 303-311).\", \"[4] Swersky, K., Rubanova, Y., Dohan, D., & Murphy, K. (2020, August). Amortized Bayesian Optimization over Discrete Spaces. In Conference on Uncertainty in Artificial Intelligence (pp. 769-778). PMLR.\", \"[5] Dadkhahi, H., Shanmugam, K., Rios, J., Das, P., Hoffman, S., Loeffler, T. D., & Sankaranarayanan, S. (2020). Combinatorial Black-Box Optimization with Expert Advice. arXiv preprint arXiv:2006.03963.\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The idea is interesting but there are some theoretical and empirical parts that need to be clarified more.\", \"review\": \"**Summary**\\nThis paper proposes a model-based black-box function optimization on purely categorical variables. Two different representations for categorical variables are proposed, one is an improved pseudo-boolean function form capable of representing non-binary categorical variables in a compact way and another is to rely on (mathematical) group representation theory after mapping each categorical variable to a cyclic group. In acquisition function optimization, SA is used for generic BO and MCTS is used for design problems. The proposed EGD-F/G are compared with baselines and shown to have competitive computational efficiency with comparable performance.\\n\\n\\n**Strengths**\\n1. A compact representation for general (including non-binary) categorical variables is proposed and the strengths of the compact representation are theoretically supported, which is an improvement upon a bit ad-hoc approach to handle non-binary categorical variables proposed in BOCS.\\n2. The proposed algorithm is more efficient than COMBO with comparable performances. EGO-F/G may find more useful applications in problems where more evaluations are affordable than classical BO settings.\\n\\n\\n**Weaknesses**\\n1. EGO-F/G does not provide explicit uncertainty in its surrogate modeling, which is crucial in balancing exploitation-exploration trade-off. And the acquisition function is kind of just predictive mean of the surrogate model, it seems that balancing exploitation-exploration is the full responsibility of the acquisition function optimizer, which makes me think that the performance is attributed more to different choices of an acquisition function optimizer and less to surrogate models. However, if there is some stochasticity in surrogate model training, then this can be interpreted as Thompson sampling as NN trained with SGD can be regarded as a posterior sample, then it seems OK to say that surrogate model plays its part for exploitation-exploration trade-off. It would be good if the authors can clarify this. \\n2. Since, typically, not much information about an objective is available in advance, choosing a different acquisition function optimizer is not practical. Even though the test problems are divided into generic BBO and design problems, it seems that both SA and MCTS can be used in all experiments. For example, SA for the constrained problem can simply mask softmax values corresponding to invalid ones and MCTS for unconstrained ones is easier to adapt. If EGO-F/G is shown to be less sensitive to the choice of the acquisition function optimizer, then the argument for the strong representational power of the surrogate models will be supported more strongly. Therefore, it is recommended that the authors compare SA and MCTS in experiments where possible as BOCS(Ricardo Baptista, 2018) compares BOCS-SA and BOCS-SDP. \\n3. In spite of the benefits from Fourier transform on finite Abelian groups, giving a cyclic group structure to each categorical variable impose a random structure on values of the categorical variable. For example, if a categorical variable has 6 categories and is mapped to Z_6, then the categorical value corresponding to 2 and the categorical values corresponding to 4 are the inverse to each other due to the mapped group structure but this relation is not natural with regard to original categorical values. 
I was not able to find any theoretical/empirical analysis of this in the paper.\\n4. Up to the experiment section, the paper is presented in a way that the ultimate goal is to find an optimum in as few evaluations as possible. However, in synthetic benchmarks, ECO-F/G are argued to be better than baselines because of the computational efficiency, which sounds a bit contradictory.\\n\\n\\n**Recommendation**\\nBecause of the concern about the consistency of the performance when using different acquisition function optimizers, the empirical demonstration has points to be improved. And even though a mathematically elegant expert is given by the Fourier transform on a finite abelian group, the correspondence between a categorical variable and a (finite) cyclic group needs more investigation. Therefore, in spite of its interesting idea, I think the paper needs some improvement for acceptance.\\n\\n\\n**Questions**\\n- SA and MCTS are acquisition function optimizers, not acquisition functions themselves. In contrast to typical BO, where the acquisition function is a function of the predictive distribution given by the surrogate model, the acquisition function of ECO-F/G is the surrogate model itself (in other words, the identity function on a kind of predictive mean), isn't it?\\n- In regard to weakness 2, the correspondence between categorical values of a categorical variable and the elements of a group seems arbitrary. Does this mapping affect performance? Do you have any reasonable interpretation of this?\\n- In each experiment, which m (max model order) is used? Does the choice of m have a significant impact on runtime and optimization performance?\\n- Maybe on Eterna-100 dataset, COMBO is not applicable?\\n\\n\\n**Additional feedback** (Irrelevant to the decision assessment)\\n- The last sentence of the first paragraph of the introduction is a bit confusing. Since mixed variable problems include pure categorical ones as subproblems, intuitively, mixed variable problems look more challenging than purely combinatorial ones.\\n- It would be better if 'Proof, see appendix' were added under Thm 3.1.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
D9pSaTGUemb | Implicit Acceleration of Gradient Flow in Overparameterized Linear Models | [
"Salma Tarmoun",
"Guilherme França",
"Benjamin David Haeffele",
"Rene Vidal"
] | We study the implicit acceleration of gradient flow in over-parameterized two-layer linear models. We show that implicit acceleration emerges from a conservation law that constrains the dynamics to follow certain trajectories. More precisely, gradient flow preserves the difference of the Gramian matrices of the input and output weights and we show that the amount of acceleration depends on both the magnitude of that difference (which is fixed at initialization) and the spectrum of the data. In addition, and generalizing prior work, we prove our results without assuming small, balanced or spectral initialization for the weights, and establish interesting connections between the matrix factorization problem and Riccati type differential equations. | [
"gradient flow",
"implicit acceleration",
"difference",
"overparameterized linear models",
"linear models",
"conservation law",
"dynamics",
"certain trajectories",
"input"
] | Reject | https://openreview.net/pdf?id=D9pSaTGUemb | https://openreview.net/forum?id=D9pSaTGUemb | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"0SX78josca",
"zsCNV3qUz1u",
"7ChSa8QLkAx",
"Rd68ELY8T6",
"e4UBdIHTy1v",
"b7S7YqsqdUO",
"3XfTUXgqATh",
"9bhtVOMNk4Q",
"EzZ0Ro8mEN7"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040356808,
1606302218817,
1606301315874,
1606300514887,
1606299643164,
1604274783401,
1603942884265,
1603896985666,
1603817712024
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3576/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3576/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3576/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3576/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3576/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3576/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3576/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3576/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"This paper studies the implicit acceleration of gradient flow in over-parameterized two-layer linear models. The authors show that the amount of acceleration depends on the spectrum of the data without assuming small, balanced, or spectral initialization for the weights, and establish interesting connections between matrix factorization and Riccati differential equations. While this paper provides some interesting results regarding implicit acceleration in training linear neural networks, the reviewers raised quite a few questions and concerns about some claims made in the paper, as well as an inadequate comparison with previous work. Even after the author's response and reviewer discussion, the reviewers' doubts are still not completely cleared away. I feel the current form of the paper is slightly below the bar of acceptance, and encourage the authors to carefully address reviewers' comments in the revision.\"}",
"{\"title\": \"Response to AnonReviewer4\", \"comment\": \"Thank you for the comments on our paper. Below we address some of the issues raised.\\n\\n- **On the relevance of continuous-time comparisons of different parameterizations.**\\n\\nWe understand the reviewer's concern but we would like to highlight the fact that a discretization and its underlying ODE are always tied together and one is not free to change one without changing the other. The reviewer's argument of reparametrizing $X \\\\to 2X$ into the ODE is equivalent to also changing the discrete-time algorithm. The rescaled GF reads $2 \\\\dot{X} = Y - 2X$. Its discretization is $X_{k+1} = X_k + (\\\\eta/2)(Y - 2 X_k)$, which is not the same as the original $X_{k+1} = X_k + \\\\eta (Y - X_k)$. The solution of the rescaled GF is $X(t) = Y/2 + (X(0) - Y/2) e^{-t}$, which gives a rate of $O(e^{-t})$. The discretization yields $X_{k+1} = (1-\\\\eta) X_k + (\\\\eta/2) Y$, which has a convergence rate of $O(e^{- \\\\eta k})$; note that both discrete- and continuous-time rates match. We see no improvement by this rescaling argument. \\n\\nTo provide more intuition for the reader, we included a specific example in Appendix G in\\nthe manuscript, showing that the nonlinear GF considered in the paper closely preserves\\nthe convergence rates of GD under the same problem. We kindly request\\nthe reviewer to check this discussion.\\n\\n- **\\\"The convergence results require strong assumptions (parameter and data can be diagonalized simultaneously)... The authors may also need to compare the convergence rates of the derived results and those in the following papers.\\\"**\\n\\nThe spectral initialization condition (i.e. where the matrix of parameters and the data can be diagonalized simultaneously) is actually not necessary to prove convergence, and was relaxed in Section 4. Moreover, our results do not make any assumptions on the distribution of the initialization since the only quantity that matters is $||U_0^TU_0 - V_0^TV_0||$. Regarding the listed papers, we have included some of them in the revision (the ones that we find relevant connections). Note that [2] and [3] analyze residual networks and [1] and [4] require sufficiently wide layers to prove convergence. In contrast, our analysis holds as long as the width is larger than the dimensions of the data.\\n\\n- **On the regularization term.**\\n\\nIndeed, the regularization term was used in these two references, and the reason was to ensure balancedness and extend the convergence results for symmetric factorizations to asymmetric ones. We noticed this connection as well, and extended it to imbalanced initializations of the type $\\\\Lambda_{\\\\mathcal{Q}_0} = \\\\lambda_0 I$. \\nIn our work, we rather provide an explanation for the origin of this regularization\\nterm, namely it arises as a consequence of relaxing the spectral\\ninitialization condition and requiring that the trajectories of both regularized\\nand unregularized problems be the same (this was not a concern in previous works).\\nNote that in the general case the dynamics is described by\\nEq. (18), which turns out to be equivalent to a gradient flow applied \\nto the objective in Eq. (20).\\nIn short, while previous works simply introduced this term to enforce\\nbalancedness, our results give a formal explanation of why the regularization\\nhas to be in this form (including the $1/8$ factor). \\n\\n- **Is overparameterization necessary? 
Can the results be extended to the low-rank case?**\\n\\nOur results show the effects of overparameterization in the sense of increasing width, i.e. in going from one layer, $X$, to two layers, $UV^T$. There are no constraints on the width, $k$, except that it must be at least $\\min(m,n)$ to avoid rank deficiency. However, the width $k$ appears indirectly through the quantity $||U_0^TU_0 - V_0^TV_0||$ and may thus affect the convergence rate. When the data is rank deficient, say rank $r$, the dynamics can still be described using Riccati equations and our analysis and results would hold if $k \\ge r$, as long as the initial weights are imbalanced; the imbalance offsets the rank deficiency in the Riccati solution.\"}",
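A tiny numerical sketch of the scalar example worked out in this response, checking that the original and rescaled Euler discretizations contract their respective errors at the same per-step rate $(1-\eta)$, i.e. $O(e^{-\eta k})$; the values used are illustrative:

```python
import numpy as np

def run_gd(x0, y, eta, steps, rescaled=False):
    # original:  x_{k+1} = x_k + eta * (y - x_k)        -> fixed point y
    # rescaled:  x_{k+1} = x_k + (eta/2) * (y - 2 x_k)  -> fixed point y/2
    # Both contract the error by (1 - eta) per step, i.e. O(e^{-eta k}).
    x, errs = x0, []
    target = y / 2 if rescaled else y
    for _ in range(steps):
        x = x + (eta / 2) * (y - 2 * x) if rescaled else x + eta * (y - x)
        errs.append(abs(x - target))
    return np.array(errs)

e1 = run_gd(0.0, 1.0, eta=0.1, steps=50)
e2 = run_gd(0.0, 1.0, eta=0.1, steps=50, rescaled=True)
print(np.allclose(e1 / e1[0], e2 / e2[0]))  # True: identical normalized decay
```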
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you for the feedback and comments.\\n\\n- **\\\"Page 2, (2): why call (2) \\u2018a symmetric one-layer linear model\\u2019? Does that suggest $m=n$? The same issue holds for (3). Is $Y$ in (3) the same as that in (2)?\\\"**\\n\\nYes, thank you for catching that typo. We have fixed that in the revision. The matrices are symmetric thus $m=n$ in Section 2.\\n\\n- **On the difference between convergence rates in the symmetric and asymmetric case and the role of the initialization.**\\n\\nThis is a good observation, thank you. In the (general) asymmetric case, the convergence depends on the conserved quantity $||U_0^T U_0 - V_0^T V_0||$ and also on the data spectrum. When the initialization is balanced, \\ni.e. $\\\\| U_0^TU_0 - V_0^TV_0 \\\\| = 0$, then only the data spectrum survives in the convergence rate. In the symmetric case, one must necessarily have\\nbalanced initialization (since $V=U$) and this is why the convergence\\nrate does not depend on the initialization. \\n\\n- **What is the convergence behaviour when $\\\\sigma \\\\approx 0$?**\\n\\nIndeed, it is possible for that scenario to occur when the initial weights are balanced and convergence only depends on the data spectrum. However, as we have shown in the paper, an imbalanced initialization accelerates the convergence of even the smallest components.\\n\\n- **\\\"In Proposition 5, the authors build the convergence results for the case $\\\\Lambda_{\\\\mathcal{Q}_0} = \\\\lambda_0 I$ for the general non-spectral initialization case. However, I hardly can find a non-spectral initialization case satisfying such a condition. Can the authors provide some examples?\\\"**\\n\\nWe do not expect this condition to be fully satisfied in practice. Such a condition was a necessary assumption in order to make progress on this problem from\\na mathematical standpoint, namely to obtain closed form solutions that characterizes the dynamics exactly. \\nImportantly, however, we have reasons to expect that\\nthis idealized case does capture a more general behaviour with arbitrary initializations. This was indeed verified numerically in Section 5 where we illustrate the effect of imbalance. In fact, we did extensive numerical explorations to verify that the dynamics under the assumption $\\\\Lambda_{Q_0} = \\\\lambda_0 I$ are general enough to capture the qualitative behaviour of any asymmetric factorization problem, after the parameter $\\\\lambda_0$ is appropriately tuned (i.e. we claim that the dynamics with an arbitrary initialization can be reproduced by dynamics under \\nthe assumption $\\\\Lambda_{\\\\mathcal{Q}_0} = \\\\lambda_0 I$ with a suitable\\n$\\\\lambda_0$). In Figure 2 we illustrated this for a few cases.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"We thank the reviewer for the positive feedback and constructive comments which we have taken into account in the revision. Here we address some of the questions raised.\\n\\n3.1. Thank you for pointing out these references. We note that our imbalanced setting is more general than the approximate balancedness condition in the first reference. In fact, in the case of gradient flow, convergence is guaranteed and even accelerated as the weights become more imbalanced. The second reference assumes strong alignment which is similar to the spectral initialization assumption and is a particular case of our analysis.\\n\\n3.2. The analysis of deeper networks is more involved since weights from different layers start to get \\\"entangled\\\" through the factorization which is manifested through complicated nonlinearities and extra coupled differential equations. \\nSpecific comments in this direction require a complicated analysis, which can be an interesting (and challenging) problem for future work (as the reviewer already pointed out, it is not obvious how to do this). Nevertheless, the ideas proposed in this paper provide a starting point. Let us mention that, at this stage, we do know that each two consecutive layers will create conserved quantities, therefore we conjecture that the convergence rate will depend on the accumulated imbalance from each added layer. Perhaps the basic mechanism that we introduce in this paper happens in multiple-scale fashion. Anyhow, these comments are still speculative and we do not have a proper mathematical answer.\\n\\n3.3. We would like to invite the reviewer to also check the answer that we provided to reviewer 4 regarding a \\nrelated question. \\n\\nWe stress that any reasonable discretization of a continuous system is expected to capture its behaviour up to some level of accuracy. Thus, as long as the discretization is stable, one may expect on general terms that the discrete-time convergence rate will be approximately the same as the continuous one (with potentially some source of error which is small).\\n We mentioned this recent work of Franca, Jordan and Vidal (2020) since they propose the first general framework where continuous-time convergence rates\\ncan be automatically preserved through a class of discretizations (symplectic integrators). This was done for general (dissipative) Hamiltonian systems by exploring the consequences of the underlying symplectic\\nstructure together with backward-error analysis \\n(a powerful technique in numerical analysis of ODEs). Note that these systems are much more\\ngeneral than the simple gradient flow considered here, however we mention that gradient flow can be seen\\nas a high-friction limit of a classical Hamiltonian system with damping, and\\ncorrespondingly,\\ngradient descent can be seen as a high friction limit or a particular \\nsymplectic integrator.\\nIn this sense, there is a relation with the mentioned work.\\nNevertheless, the gradient flow is actually much simpler than these general Hamiltonian systems\\nand one can show directly that gradient descent closely preserves the gradient flow rates.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We thank the reviewer for the feedback and comments. Here we address the specific questions that are raised.\\n\\n**(a) Is there any provable advantage of over-parameterization?** \\n\\nThe goal of the paper is to show the effect of overparameterization by replacing $X$ by its factorized form $UV^T$. As a consequence, the number of parameters increases from $mn$ to $k(m+n)$ with $k \\\\geq \\\\min(m,n)$. Our results show that the convergence rate can depend on the width $k$ of the factorization (the number of columns in $U$ and $V$) indirectly through the quantity $||U^TU - V^TV||$ and the data spectrum; such quantities do not play a role in the non-factorized problem. \\nWe are not certain about what the reviewer means by 'properly parameterized' $U$ and $V$. \\n\\n**(b) Sections 3 and 4 are more novel than section 2.** \\n\\nWe agree that the main conclusions of our paper are drawn from Sections 3 and 4. The reason for Section 2 is twofold. First, to provide a warm up and entry point to the reader. Second, to introduce important tools and theorems that will be used in the following sections. For example, in Section 4, we establish a connection between the dynamics of symmetric and asymmetric matrix factorizations and show that the latter can also be reduced to a Riccati equation of the same type as the former. The solution and convergence rate are thus deduced through a similar analysis.\\n\\n**(c) Can the results extend to settings with sampling error like matrix completion?**\\n\\nIn a matrix completion context we believe the reviewer refers to introducing a linear sampling operator $\\\\mathcal{A}(U V^T)$.\\nNote that our formulation has some generality since the conservation law is the result of a symmetry and holds for any objective function of the type $f(UV^T)$. Therefore, we expect that similar conclusions still hold and should extend naturally to the case of introducing such a linear operator.\\n\\n**\\\"I do not quite understand the claims in Figure 2. You explained in Section 4 that you need $K \\\\leq m+n$ for identity initializations and Proposition 5, but here you say that k ranges from 50 to 200? Please clarify.\\\"**\\n\\nIndeed, in Section 4, we restricted the analysis to the case of a scaled identity imbalance because it is the only setting where we can obtain closed form solutions. However, Figure 2 shows that, even without such an assumption, our results hold empirically to general settings. Moreover, in the bottom row of Figure 2, we show that the condition $\\\\Lambda_{\\\\mathcal{Q}_0} = \\\\lambda_0 I$ is general enough to capture the qualitative behaviour of the dynamics under any type of imbalance. In fact, the figure shows that we can approximate any matrix factorization problem with the dynamics under $\\\\Lambda_{\\\\mathcal{Q}_0} = \\\\lambda_0 I$ for a suitable $\\\\lambda_0$.\"}",
"{\"title\": \"A detailed analysis of gradient flow in two-layer linear neural networks\", \"review\": \"Summary of review:\\n\\nThis paper provides a detailed analysis of gradient flow in (over-parametrized) two-layer linear neural networks. The main results state the precise dynamics of gradient flow for both symmetric and asymmetric matrix factorization, starting from certain spectral initialization. One novel insight that stems from the analysis is that for asymmetric matrix factorization, \\\"imbalanced initializations\\\", where the left and right singular values of the iterates differ, converges faster than \\\"balanced initializations\\\". Simulations further validate this insight.\", \"setting\": \"(i) Symmetric matrix factorization: Given a symmetric matrix Y, the problem is to solve a mean squared loss between Y and UU^T to factorize Y, where U is a (possibly over-parametrized) variable matrix.\\n\\n(ii) Asymmetric matrix factorization: The asymmetric setting considers an asymmetric matrix Y and the asymmetric factorization of UV^T for factorizing Y.\\n\\nResults\\n\\n(i) This paper focuses on the convergence of the gradient flow of U for minimizing the mean squared loss, starting from spectral initializations. Informally, a spectral initialization has the same eigenspace as Y.\\n\\nFor these spectral initializations, the gradient flow paths essentially become coordinate-wise updates over every singular value. Then, the authors went on to state the precise dynamics of gradient flow for every singular value. The results imply that \\\"large\\\" singular values (of Y) converge faster than \\\"small\\\" singular values (of Y) in gradient flow.\\n\\n(ii) This paper begins by studying spectral initializations, where $||U^TU - V^TV||_F$ is small, then discusses how to generalize their result to non-spectral initializations.\\n\\nFor spectral initializations, this paper observes crucially that $U^TU - V^TV$ is preserved throughout gradient flow. Hence if this quantity starts small, it will remain small throughout gradient flow. Furthermore, for \\\"large\\\" singular values of $U^TU - V^TV$, the results imply that these singular values converge faster than \\\"small\\\" singular values of $U^TU - V^TV$.\\n\\nFor non-spectral initializations, this paper observes that $U^TU - V^TV$ is still preserved during gradient flow, but this quantity now depends on how \\\"balanced\\\" the initializations of U, V are.\", \"criticism\": \"It would help improve my understanding of this paper if the authors clarify the following questions.\\n\\n(a) The \\\"acceleration\\\" claim of this paper comes from comparing the results to a linear model baseline. However, I am not completely sold on this comparison. For example, what would the results imply if compared to properly parametrized U (and V)? Is there any provable advantage of over-parametrization in this setting?\\n\\n(b) The result of Corollary 1 for symmetric matrix factorization, where larger eigenvalues converge faster than smaller eigenvalues, also appears in linear regression. In particular, the gradient flow of linear regression also shows similar patterns, where larger eigenvalues (of the sample covariance matrix) converge faster than smaller eigenvalues. Therefore, in my opinion, the results in Section 3 and 4 seem more novel compared to the results in Section 2.\\n\\n(c) While the results state fairly precise dynamics of gradient flow, how well would they extend to settings with sampling errors? For example, what about matrix completion? 
Would your claim regarding \\\"imbalanced initializations\\\" still hold?\\n\\nWriting\\n\\nOverall, this paper is well-written and easy to follow. Although the paper would be easier to read if it is less dense. I do not quite understand the claims in Figure 2. You explained in Section 4 that you need $K \\\\le m + n$ for identity initializations and Proposition 5, but here you say that k ranges from 50 to 200? Please clarify.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Some comparisons may be presented in a more detailed and precise way\", \"review\": \"This paper studies the implicit acceleration of gradient flow for training a two-layer linear model. Compared with the one-layer linear model, the authors show that gradient flow over an overparameterized two-layer linear model may achieve a faster convergence rate, given a nice data spectrum and proper initialization. Moreover, the authors investigate the convergence of gradient flow with an arbitrary initialization and show its connection to Riccati differential equations as well as the explicit regularization. Overall the idea is clearly presented, the experimental results also well back up the theory.\\n\\nMy main concern is that it may not be fair to compare the convergence rate in terms of gradient flow. For example, you can simply reparameterize the parameter by X -> 2X, and the acceleration can also be achieved from the perspective of gradient flow. I think in order to fairly compare the convergence between different parameterizations/initializations, gradient descent is a better choice. Back to the example of X->2X, in this case, the smoothness parameter will become larger, finally one can observe that the convergence rate of GD under this parameterization will remain unchanged. \\n\\nRegarding the convergence results, the authors still require strong conditions (parameter and data can be diagonalized simultaneously) on the initialization to prove the convergence of gradient flow. What happens if considering more general assumptions on the initialization, such as the random/orthogonal initialization used in the following papers? The authors may also need to compare the convergence rates of the derived results and those in the following papers.\\n\\n[1] Du, Simon S., and Wei Hu. \\\"Width provably matters in optimization for deep linear neural networks.\\\" arXiv preprint arXiv:1901.08572 (2019). \\n\\n[2] Wu, Lei, Qingcan Wang, and Chao Ma. \\\"Global convergence of gradient descent for deep linear residual networks.\\\" Advances in Neural Information Processing Systems. 2019. \\n\\n[3] Zou, Difan, Philip M. Long, and Quanquan Gu. \\\"On the Global Convergence of Training Deep Linear ResNets\\\". International Conference on Learning Representations.\\n\\n[4] Hu, Wei, Lechao Xiao, and Jeffrey Pennington. \\\"Provable Benefit of Orthogonal Initialization in Optimizing Deep Linear Networks.\\\" International Conference on Learning Representations.\\n\\nThe regularization term in Eq. (20) seems pretty similar to the commonly-used regularization in solving low-rank matrix problems [5,6] for balancing (but they use 1/16 weight), could you please briefly discuss their connections?\\n\\n[5] Tu, Stephen, et al. \\\"Low-rank solutions of linear matrix equations via Procrustes flow.\\\" International Conference on Machine Learning. PMLR, 2016.\\n\\n[6] Wang, Lingxiao, Xiao Zhang, and Quanquan Gu. \\\"A unified computational and statistical framework for nonconvex low-rank matrix estimation.\\\" Artificial Intelligence and Statistics. 2017.\\n\\nIs the overparameterization necessary? From the presented theorems, I do not see whether the derived results have a dependency on the dimension of the matrix U or V. The authors may clearly specify why one needs to consider overparameterization linear models. 
For example, assume the data matrix has rank r, can the theory in this paper still hold if set the dimension of U or V as d*r?\\n\\n=========== after reading rebuttal ===============\\n\\nI do not agree with the author's response to my first comment. Considering parameterization $X = 2U$ (here I use the notation U to avoid the confusion), then the loss function becomes $L(U) = \\\\frac{1}{2}\\\\\\\\|Y-2U\\\\\\\\| _2^2$ and the gradient flow with respect to $Z$ writes $\\\\dot {U}= -2(2Z-Y)$. Then we have $\\\\dot {X} = 2\\\\dot{U} = -4(2U-Y) = -4(X-Y)$, which gives a rate $O(e^{-4t})$, which is faster than the $O(e^{-t})$ rate achieved without using this parameterization. Therefore, comparing the convergence rate in terms of gradient flow may still not be fair and valid. \\n\\nBased on this issue I would like to keep my score.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting Approach to Understanding Implicit Acceleration\", \"review\": \"1. Paper Summary\\n\\nThis work analyzes the implicit acceleration of gradient flow for over-parameterized 2 layer networks (i.e. 1 hidden layer networks) used for matrix factorization. By presenting a novel analysis connecting the gradient flow to Riccati type differential equations, this work demonstrates that imbalanced initializations can lead to acceleration. The authors present convergence rates for symmetric and asymmetric matrix factorization under both spectral and non-spectral initializations. The convergence rates for these settings are indeed faster than those in the linear model. The authors lastly provide empirical results to support their theory. \\n \\n\\n######################################################################\\n\\n2. Strengths\\n\\n2.1. The connection between gradient flow and Riccati type differential equations is novel to the best of my knowledge and provides a simpler and clearer means of understanding implicit acceleration in over-parameterized models than prior works. \\n\\n2.2. The paper is written clearly and is easy to follow. The authors present examples of acceleration under the simpler setting of spectral initialization before extending to the more nontrivial case of non-spectral initializations. \\n \\n\\n######################################################################\\n\\n\\n3. Limitations\\n\\n3.1. I believe the authors are missing some references to related work. In particular, below are some related works:\\n\\n(1) https://arxiv.org/pdf/1810.02281.pdf - This work extends the notion of zero balancedness considered in the work by Arora et al. 2018 to the case of approximate balancedness. The analysis presented in this work is for deep linear networks (more than 2 layers) and for gradient descent. Technically, I believe this work does fall under the imbalanced case and also yields a fast convergence rate. \\n\\n(2) https://arxiv.org/abs/2003.06340 - This work analyzes spectral initialization under gradient descent in deep linear networks (of arbitrary layer structure). In particular, this work also demonstrates linear (fast) convergence for gradient descent under spectral initialization (Proposition 2). \\n\\nI believe it is important for the authors to position their work with respect to the above works, but I do not feel that missing these references diminishes the novelty of the submitted work. \\n \\n3.2. While the connection to Riccati type differential equations is interesting, it would be nice if the authors could discuss how similar analysis could be used for deep linear networks (at the moment there does not appear to be an obvious extension). \\n\\n3.3. (Minor) The authors briefly mention that discrete time convergence rates can be derived from continuous time counterparts using symplectic integrators in the introduction, but it would be nice if there were a more rigorous connection between the related work and the rates presented in the submitted work. \\n\\n\\n######################################################################\\n\\n4. Score and Rationale\\n\\nMy recommendation is to accept the paper. I believe the main strength of the paper was in providing a well-presented, rigorous, and novel analysis for understanding acceleration in over-parameterized matrix factorization. 
The connection to Riccati type differential equations and identifying dynamical invariants presents an interesting alternative means of gaining intuition around implicit acceleration in over-parameterized networks. \\n\\n\\n######################################################################\\n\\n5. Comments\\n\\n5.1. I believe the Y in equation 3 should be m x m instead of m x n\\u00a0for the symmetric case. \\n\\n5.2. I feel that the notation could be made a bit easier to follow in Section 3 & 4. In particular, I believe bar(U), bar(V) are overloaded to represent diagonal matrices in definition 3, but these quantities are assumed to be non-diagonal for the remainder of the work. As this is an important point, I feel that it could be emphasized a bit more. \\n\\n5.3. I feel that there could be more intermittent references to prior results throughout the work. For example, the symmetric matrix factorization problem is discussed extensively in https://papers.nips.cc/paper/7195-implicit-regularization-in-matrix-factorization.pdf. Similarly, invariance and convergence rate for gradient descent and under spectral initialization is discussed in https://arxiv.org/abs/2003.06340. \\n\\n5.4. There is an important distinction between the matrix factorization problems considered in this work and prior work. Namely, the matrix Y has fully observed entries whereas in some prior works, Y is not completely observed. I think the analysis may get a bit more tricky for the case of unobserved entries, but the authors could maybe point this out.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}",
"{\"title\": \"Review for 'Implicit Acceleration of Gradient Flow in Overparameterized Linear Models'\", \"review\": [\"This paper considers the gradient flow dynamic for two-player linear neural networks. In detail, it studies the implicit acceleration of gradient flow brought by overparameterization and shows the reason for implicit acceleration is the existence of conservation law. It studies the convergence for gradient flow under both balanced or imbalanced linear networks, and with spectral or non-spectral initialization. Compared with previous work, this work is the first to provide an explicit characterization of the gradient flow with respect to their eigenvalues. Experiment results suggest that such an implicit acceleration indeed exists.\", \"Here are my detailed comments.\", \"Page 2, (2): why call (2) \\u2018a symmetric one-layer linear model\\u2019? Does that suggest $m = n$? The same issue holds for (3). Is $Y$ in (3) the same as that in (2)?\", \"An interesting observation is that under the spectral initialization, for the symmetric case, the convergence rate of the eigenvalues does not depend on the initial value of $X_0$ (it is $e^{-4t|\\\\sigma_i|}$). However, for the asymmetric case, the convergence rate is $e^{-t\\\\sqrt{4\\\\sigma_i^2 + \\\\lambda_0^2}}$, which explicitly depends on the initial matrix $X_0$. Can the authors make more comments about that?\", \"The \\u2018acceleration\\u2019 compared with original gradient flow over $X$ suggests that by a matrix factorization, the convergence of eigenvalues varies may be accelerated according to the eigenvalues over the data matrix $Y$. Is it true that in the worst case, say for some $i$, $\\\\sigma_i \\\\approx 0$, then the convergence speed of $\\\\sigma_i(t)$ will decrease to 0? In that case, can we still call it \\u2018acceleration\\u2019?\", \"In Proposition 5, the authors build the convergence results for the case $\\\\Lambda_{Q_0} = \\\\lambda_0 I_k$ for the general non-spectral initialization case. However, I hardly can find a non-spectral initialization case satisfying such a condition. Can the authors provide some examples?\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}"
]
} |
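A recurring quantitative claim in the record above is that gradient flow on $L(U,V) = \frac{1}{2}\|Y - UV^T\|_F^2$ conserves $Q = U^T U - V^T V$, and that a large imbalance $\|Q_0\|$ accelerates convergence of even the smallest components. A minimal NumPy sketch of that mechanism follows (not the authors' code; the problem sizes, learning rate, step budget, and tolerance are arbitrary assumptions, and small Euler steps stand in for the continuous-time flow):

```python
import numpy as np

def steps_to_tol(Y, U, V, lr=1e-3, steps=40000, tol=1e-8):
    # Euler discretization of gradient flow on L(U, V) = 0.5 * ||Y - U V^T||_F^2,
    # updating U and V simultaneously; returns the first step whose loss < tol.
    for t in range(steps):
        R = U @ V.T - Y  # residual
        if 0.5 * np.linalg.norm(R) ** 2 < tol:
            return t
        U, V = U - lr * (R @ V), V - lr * (R.T @ U)
    return None  # tolerance not reached within the step budget

rng = np.random.default_rng(0)
m = n = k = 20
Y = rng.standard_normal((m, n))
A = 1e-3 * rng.standard_normal((m, k))
B = 1e-3 * rng.standard_normal((n, k))

# Balanced start: U0 and V0 share the same tiny scale, so Q0 = U0^T U0 - V0^T V0 ~ 0.
print("balanced  :", steps_to_tol(Y, A.copy(), B.copy()))
# Imbalanced start: the product U0 V0^T (hence the initial loss) is unchanged,
# but ||Q0|| is now large, since Q0 ~ 1e6 * A^T A.
print("imbalanced:", steps_to_tol(Y, 1e3 * A, 1e-3 * B))
```

In runs of this kind the imbalanced start typically reaches the tolerance in far fewer steps, consistent with the rate $e^{-t\sqrt{4\sigma_i^2 + \lambda_0^2}}$ quoted in the review above, whereas the balanced run is governed by the smallest singular values of $Y$ alone and may exhaust the budget.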
UmrVpylRExB | Dual-Tree Wavelet Packet CNNs for Image Classification | [
"Hubert Leterme",
"Kévin Polisano",
"Valérie Perrier",
"Karteek Alahari"
] | In this paper, we target an important issue of deep convolutional neural networks (CNNs) — the lack of a mathematical understanding of their properties. We present an explicit formalism that is motivated by the similarities between trained CNN kernels and oriented Gabor filters for addressing this problem. The core idea is to constrain the behavior of convolutional layers by splitting them into a succession of wavelet packet decompositions, which are modulated by freely-trained mixture weights. We evaluate our approach with three variants of wavelet decompositions with the AlexNet architecture for image classification as an example. The first variant relies on the separable wavelet packet transform while the other two implement the 2D dual-tree real and complex wavelet packet transforms, taking advantage of their feature extraction properties such as directional selectivity and shift invariance. Our experiments show that we achieve the accuracy rate of standard AlexNet, but with a significantly lower number of parameters, and an interpretation of the network that is grounded in mathematical theory. | [
"convolutional neural networks",
"wavelet packet transform",
"dual-tree wavelet packet transform",
"image classification",
"deep learning",
"image processing"
] | Reject | https://openreview.net/pdf?id=UmrVpylRExB | https://openreview.net/forum?id=UmrVpylRExB | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"--M-JlYRnKX",
"S3MbQQC36uA",
"I4bD8uezydJ",
"bg37UP3JurW",
"ak4dnafIwFg",
"9UQ2FeGjM3B",
"3fVsXsxG3nH",
"mN8Xe6jQuhZ",
"VhZTM_z4jnO",
"ol7JfpTad1g",
"jDfpY8KDRcn",
"6r8MdHNbEED"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040392616,
1606251964471,
1605911362258,
1605911235300,
1605910970794,
1605910782188,
1605910571556,
1605910227898,
1604074817368,
1603988665558,
1603903935032,
1603804528029
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3573/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3573/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3573/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3573/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": \"The paper is motivated by the observed similarity between learned filters at the low layers of a convolutional neural network and oriented Gabor filters. It proposes to replace the lower layers with dual tree wavelet packet transforms, which yield fixed oriented frequency-selective features. Instead of learning filters from scratch, it proposes to learn only a scalar importance for each of these features, reducing the number of learned parameters. Experiments with the AlexNet architecture on ImageNet indicate that this modification does not reduce performance, but does significantly reduce the number of parameters. The paper argues that this modification also improves the interpretability (and in the case of complex dual tree wavelets, potentially the invariance properties) of the low-level features.\", \"pros_and_cons\": \"[+/-] As the paper clearly argues, replacing learned filters with wavelet packet transforms improves the interpretability of the low layers of a convolutional network. While other works have pursued similar ideas, limiting the conceptual novelty, at a technical level this is the first work to use the dual tree complex wavelet transform for this purpose. The DT-CWT may have mathematical advantages. The paper and rebuttal argue that it it is conceptually cleaner (\\u201csparser\\u201d, since the transform is generated by a single filter) although there may not be a greater reduction in the number of trainable parameters.\\n\\n[+] The additional per-channel weights are redundant in terms of the representation capacity of the network, but may effectively introduce sparse regularization (see work on the \\u201cHadamard parameterization\\u201d in implicit sparse regularization), allowing the network to select relevant wavelet features.\\n\\n[+] The paper is well-organized and cleanly written. The authors revision has done a good job of addressing all clarity concerns of reviewers. \\n\\n[-] Several reviewers raised concerns about the limited scope of the experiments: the paper only replaces a single layer of a particular architecture (AlexNet) and evaluates on one particular dataset (ImageNet). \\n\\n[-] The main proposed benefit of this modification is in the interpretability of the network and its potential amenability to mathematical analysis. This claim would be stronger if the paper either 1. showed the benefit by exhibiting some rudimentary mathematical theory for this network or 2. used this idea to demonstrate networks that are significantly more interpretable, say by replacing all learned convolutional layers with DT-CWPT. \\n\\nThe paper\\u2019s reviews were split. All reviewers appreciate the paper\\u2019s clean exposition of a reasonable idea, and note the novelty of using dual tree wavelets in this context. However, reviewers express concerns about the paper\\u2019s significance: it could do more to show how replacing the lowest layer with DT-CWPT yields new insights, and do more to demonstrate (both rhetorically and experimentally) the generality of its ideas. Based on the bulk of the reviews, as written the paper falls slightly below the threshold for acceptance.\"}",
"{\"title\": \"Revised version of the paper\", \"comment\": [\"Dear reviewers,\", \"We have submitted a revised version of our paper, taking into account your comments and suggestions. The changes appear in blue to facilitate review. Here are the main differences with the first version.\", \"Introduction: the main goals have been clarified and the advantages over related works have been highlighted.\", \"Composition of convolution operators: we have renamed our result \\\"Proposition\\\" and clarified the differences with the well-known property of successive convolutions. Besides, we provided clarifications about its validity scope.\", \"Additional experiments:\", \"we trained our models on VOC and COCO datasets (multilabel classification);\", \"we tested the robustness of our models with respect to small shifts.\", \"Conclusion and future works: we moved here the paragraph about optimizing the number of trainable parameters. We also rephrased some sentences to highlight our main contributions.\"]}",
"{\"title\": \"Response to Reviewer 1 (further questions)\", \"comment\": \"> In figure 3, only filters of the red channel (i.e., W[0,\\u22c5]) are shown. But readers could also be interested in W[i,\\u22c5] for i=1,2. Maybe a colored visualization of the filters is better.\\n\\nWe chose to show only one channel out of three because they really look similar. Following your suggestion, we tried to display a colored visualization, but it becomes harder to distinguish details, especially on printed paper. Hence, we decided to keep a single-channel display in our submission. However, since we intend to publish our code on Github, we will also provide a notebook where it will be possible to display a colored version of the filters.\\n\\n> Still figure 3, how exactly is the filter cropping done, and how can we be convinced that the cropped parts of the matrices are negligible, e.g., with small Lp norms?\\n\\nWe performed a center crop to match AlexNet\\u2019s kernel size. As mentioned in the revised version, it turns out that in all our models, between 97% and 99% of the kernels\\u2019 energy (i.e., the squared L2 norm) is concentrated in these cropped regions.\\n\\n> Appendix, algorithm 1: Does \\u27e8Wr\\u27e9 mean the shape of the tensor Wr?\\n\\nYes, this is it. We fixed this omission in the revised version.\\n\\n> Section 3.1: The number N\\u2032 in the dimension of D becomes 56 in Y. Does the full-convolution (as defined in section 2) guarantee N\\u2032=56, or is there a cropping procedure?\\n\\nThe size of the output feature maps depends on the stride, or downsampling factor (it is roughly divided by 2 after each level of WPT decomposition), but also on how much we extend the image beyond its edges (padding). We have added this detail to the paper.\"}",
"{\"title\": \"Response to Reviewer 1 (main concerns)\", \"comment\": \"> [...] what is the benefit of replacing the first layer of AlexNet with pre-designed non-trainable convolutions? [...]\\n\\n> [...] the paper refers to some previous work on replacing freely-trained CNN layers with more constrained structures. What is the advantage of the proposed work over the existing ones?\\n\\nOur main insight is to describe the observed behavior of CNNs with a sparse model, and explain their predictive power using the feature extraction properties of the WPT or DT-CWPT (see below). The goal for the models proposed by Oyallon et al. (2018) is very different. They do not attempt to imitate the behavior of existing models \\u2013 linear convolutions are replaced by a non-linear transform \\u2013 but instead aim to improve inference speed and memory consumption.\\n\\nOther works like Sarwar et al. (2017) and Ulicny et al. (2019) introduce Gabor filters and discrete cosine transforms, respectively, into convolutional layers. Similar to these transforms, our wavelet packet filters are well localized in the frequency domain and share a subsampling factor over the output feature maps. A major advantage with our approach is sparsity: a single vector (called conjugate mirror filter, or CMF) is sufficient to characterize the whole process. In addition, DT-CWPT extracts oriented and shift-invariant features with minimal redundancy. These properties therefore provide a sparser description of the observed behavior of convolutional layers. This is a step toward a more complete description of CNNs by using a small number of arbitrary parameters.\\n\\nTo provide additional evidence, we are in the process of evaluating the robustness of our models with respect to sample shifts. We will also test out-of-distribution transfer by using our pre-trained models on other datasets. This is ongoing, and we will provide an update before Nov 24.\\n\\n> [...] AlexNet is not SOTA nor near SOTA, and the adaption of such a benchmark does not seem to provide persuasive results.\\n\\nSince our goal was not to merely improve CNN performance, we did not work on SOTA networks. Instead, we focused on the first layer of AlexNet as a proof of concept, because introducing wavelet packet transform into it is facilitated by its large 11x11 kernels, and convolution operations performed with a downsampling factor of 4. This allows two levels of wavelet decomposition without any additional transformation. However, the oscillating patterns in trained convolutional kernels do not restrict to the first layer, nor are they specific to AlexNet. We strongly believe that our model could be adapted to deeper layers and more recent architectures. The transposition is not trivial though, because the stride is typically set to 2 in most convolutional layers (vs 4 in AlexNet). Wavelet decomposition must be handled carefully to match the original layer\\u2019s hyperparameters. This will be tackled as part of future work.\\n\\n> In the proposed networks, only the first layer of AlexNet is replaced, but most of the parameters are in the deeper layers, especially the FC layers. It is hard to see the advantage of such a reduction of trainable parameters.\\n\\nOur method reduces the total number of parameters \\u2013 FC layers are outside the scope of this work. The primary goal of our work is not to reduce inference time or memory consumption, even though it is a step in that direction. 
Instead, we seek to describe the behavior of convolutional layers with a sparse model, taking advantage of the feature extraction properties of the dual-tree wavelet packet transform. We may not have made this point very clear in the initial submission, and clarified it in the updated version.\\n\\n> Only one dataset is used, and we do not know how the filters would be affected when applied to a different dataset.\\n\\nAs mentioned above, we are in the process of training our models on other datasets and will provide an update before Nov 24. We will compare the accuracy scores of models trained from scratch with those of ImageNet-pre-trained models after finetuning.\\n\\n> Do other choices of CMFs generate different results?\\n\\nSo far we trained our models with a Q-shift filter of length 10, but other filters may provide better extraction properties due to a higher frequency localization or number of vanishing moments. One way to address this question is to let the network learn the optimal CMF with a proper regularizer. This will be addressed in future work.\"}",
"{\"title\": \"Response to Reviewer 3\", \"comment\": \"> It would be interesting to apply the method to multiple layers of the network or in the extreme case, a network based solely on the WPT.\\n\\n> [...] not applied to other popular and performant CNN architectures, e.g. ResNet (He et al. 2015), and is only applied to the relatively older AlexNet model.\\n\\nWe focused on the first layer of AlexNet, because introducing wavelet packet transform into it is facilitated by its large 11x11 kernels, and convolution operations performed with a downsampling factor of 4. This allows two levels of wavelet decomposition without any additional transformation. However, the oscillating patterns in trained convolutional kernels do not restrict to the first layer, nor are they specific to AlexNet. We strongly believe that our model could be adapted to deeper layers and more recent architectures. Eventually, replacing all convolutional layers by wavelet decompositions and 1x1 convolutions would indeed provide a clearer description of the whole network. The transposition is not trivial though, because the stride is typically set to 2 in most convolutional layers (vs 4 in AlexNet). Wavelet decomposition must be handled carefully to match the original layer\\u2019s hyperparameters. This will be tackled as part of future work.\\n\\n> To be more convincing, the method must be demonstrated on more datasets or tasks. For example, other image recognition datasets or other tasks (such as ASR).\\n\\nWe are in the process of training our models on other datasets and will provide an update before Nov 24. We continue to focus on computer vision tasks, and compare the accuracy scores of models trained from scratch with those of ImageNet-pre-trained models after finetuning.\\n\\n> The authors show the proposed method reduces the number of learned parameters, however does the method reduce the total number of parameters? Is the inference time of the network improved/degraded with respect to the CNN baseline?\\n\\nOur method reduces the total number of parameters, but this may not seem large since most of the trainable parameters are in the fully-connected layers (59M vs 2.5M for the convolutional layers). The primary goal of our work is not to reduce inference time or memory consumption, even though it is a step in that direction. Instead, we seek to describe the observed behavior of convolutional layers with a sparse model, using a small number of arbitrary parameters. For this we took advantage of the feature extraction properties of DT-CWPT. We may not have made this point very clear in the initial submission, and clarified it in the updated version.\"}",
"{\"title\": \"Response to Reviewer 2\", \"comment\": \"Thank you for this very positive review. We have indeed tried to make our ideas as clear as possible by using a proper mathematical formalism.\\n\\n> The paper is very clear and concise, but it might be better if it compared across other nets besides AlexNet. [...]\\n\\nThe first layer of AlexNet has a relatively high number of trainable parameters (23,296), due to its large 11x11 kernels. In comparison, the first layer of ResNet, with kernels of size 7x7, has only 9,472 trainable parameters.\\n\\nSo far we focused on AlexNet, because introducing wavelet packet transform is facilitated by its large kernels, and convolution operations performed with a downsampling factor of 4. This allows two levels of wavelet decomposition without any additional transformation. However the oscillating patterns in trained convolutional kernels do not restrict to the first layer, nor are they specific to AlexNet. We strongly believe that our model could be adapted to deeper layers and more recent architectures. The transposition is not trivial though, because the stride is equal to 2 in ResNet (vs 4 in AlexNet). The wavelet decomposition must therefore be handled carefully to match ResNet\\u2019s hyperparameters. This will be tackled as part of future work.\"}",
"{\"title\": \"Response to Reviewer 4\", \"comment\": \"> On the theoretical side, I am not sure that the result on page 6 qualifies as \\u201ctheorem\\u201d \\u2013 the fact that two successive convolutions can be written as another convolution with a wider kernel is well-known in the signal processing literature.\\n\\n> Regarding the theorem, again, this is not a new result, fundamentally, and I think this should be made clear.\\n\\nThe proposition takes advantage of this well-known result indeed, but goes further. It shows that the composition of two CNN-style multi-channel convolution operators can be expressed as a single operator, and provides an explicit formulation of the resulting hyperparameters (i.e., stride, dilation factor and no. of groups describing input-output channel connections) and the weight tensor. While not groundbreaking, this result was needed for analysis \\u2013 section 3.3 shows a visual representation of the resulting kernels; besides, in future work we will quantify their similarities and study how it impacts the network\\u2019s predictive power. Since we could not find it written in this form in the literature, we presented it in our work. Nevertheless, we understand that qualifying this result as \\\"theorem\\\" may be too strong. We have renamed it \\\"proposition\\\" in the updated version of our paper, clarifying that it is not shown as a new result.\\n\\n> I also do not see any evidence that the authors intend to publish the code for this experiment.\\n\\nWe have inadvertently omitted this point; thank you for letting us know. We intend to publish our source code, as mentioned in the updated version.\\n\\n> [...] a potential subtlety here that needs to be addressed, is the behavior at the edge: [...] This potential issue is not discussed in the main text nor in the proof of the theorem.\\n\\nIn this proposition, we implicitly made a hypothesis which is not satisfied when using symmetric padding. Nevertheless, distortion effects only appear at the edges of images; and the property holds everywhere else. We leave the study of the influence of such choice on the network\\u2019s predictive power for future work.\\n\\n> While important, similar results have been established before (by Oyallon et al., 2018, for example), and it is not clear what the wavelet packet approach brings to the table.[...]\\n\\nOur main insight is to describe the observed behavior of CNNs with a sparse model, and explain their predictive power using the feature extraction properties of the WPT or DT-CWPT (see below). The goal for the models proposed by Oyallon et al. (2018) is very different. They do not attempt to imitate the behavior of existing models \\u2013 linear convolutions are replaced by a non-linear transform \\u2013 but instead aim to improve inference speed and memory consumption.\\n\\nOther works like Sarwar et al. (2017) and Ulicny et al. (2019) introduce Gabor filters and discrete cosine transforms, respectively, into convolutional layers. Similar to these transforms, our wavelet packet filters are well localized in the frequency domain and share a subsampling factor over the output feature maps. A major advantage with our approach is sparsity: a single vector (called conjugate mirror filter, or CMF) is sufficient to characterize the whole process. In addition, DT-CWPT extracts oriented and shift-invariant features with minimal redundancy. These properties therefore provide a sparser description of the observed behavior of convolutional layers. 
This is a step toward a more complete description of CNNs by using a small number of arbitrary parameters.\\n\\n> Section 4.3 [...] Since the outlined scheme is rather simple, why have the authors not tried it? [...] Similarly, the authors speculate that the dual-tree real wavelet packets perform worse than the complex variant due to higher sensitivity to image shifts. Whether this is the case can be easily tested experimentally by shifting the images in the test set.\\n\\nWe did try to train reduced models (by discarding some filters) on a subset of ImageNet, for computational reasons. It turns out that the drop in accuracy remained negligible even when reducing the number of parameters aggressively. While this seems hopeful, the dataset may have been too small to draw any strong conclusions. We thus decided to leave for future work the task of optimizing the number of trainable parameters. For clarity, we now moved this paragraph to a \\u201cFuture work\\u201d section in the revised version.\\n\\nRegarding the hypothesis on shift invariance, we are in the process of evaluating this and will provide an update before Nov 24.\\n\\n> There are also a few small problems in the main text [...].\\n\\nThank you for pointing these out. We have fixed them in the updated version.\"}",
"{\"title\": \"Response to all reviewers\", \"comment\": \"We thank the reviewers for their positive feedback, constructive comments, and suggestions. We answer the questions raised by each reviewer individually, and welcome further discussion.\"}",
"{\"title\": \"Interesting results but more analysis is needed\", \"review\": \"This paper describes a variation of the popular AlexNet architecture, where the first convolutional layer is replaced with a wavelet packet decomposition. This is motivated by the fact that the first-layer convolutional kernels look very similar to wavelet filters in that they mostly extract oriented edges and smooth gradients. The wavelet packet coefficients are then weighted using a single mixing layer implemented as a 1\\u00d71 convolution. The resulting module (wavelet packet decomposition plus 1\\u00d71 convolution) has a smaller number of learnable parameters, but achieves close to the same performance as the standard AlexNet on the ImageNet ILSVRC2012 dataset.\\n\\nThe paper is well laid-out and the important technical concepts are explained in a clear and concise manner. The various wavelet packet modules used to replace the first layer of AlexNet are well described and their differences clear. The same goes for the experimental section and the evaluation of the results. On the theoretical side, I am not sure that the result on page 6 qualifies as \\u201ctheorem\\u201d \\u2013 the fact that two successive convolutions can be written as another convolution with a wider kernel is well-known in the signal processing literature. While the experimental results are impressive, more could also be made out of the analysis. I also do not see any evidence that the authors intend to publish the code for this experiment. Despite these issues, I think the results are interesting enough that the paper be accepted for publication in the proceedings.\\n\\nAs discussed in the paper, there is a great need for theory explaining the performance of deep neural networks. This work is a step in that direction, reducing the number of learnable components and replacing them with fixed representations (here: wavelet packet decompositions) and achieving the same performance.\\n\\nRegarding the theorem, again, this is not a new result, fundamentally, and I think this should be made clear. That being said, a potential subtlety here that needs to be addressed, is the behavior at the edge: for large enough kernels, unless the boundary extension is periodic, the composition of two convolution kernels is not necessarily another convolution. This potential issue is not discussed in the main text nor in the proof of the theorem.\\n\\nCurrently, the main push of the analysis is that the first-layer kernels can be replaced by similar-looking wavelet packet kernels. While important, similar results have been established before (by Oyallon et al., 2018, for example), and it is not clear what the wavelet packet approach brings to the table. Do these filters have certain properties that make them easier to analyze? Is the resulting reduction in number of learnable parameters greater?\\n\\nSection 4.3 is also a bit odd. It is mostly speculation about how the number of parameters could be reduced further. Since the outlined scheme is rather simple, why have the authors not tried it? Otherwise, I do not quite understand why this section is included. Similarly, the authors speculate that the dual-tree real wavelet packets perform worse than the complex variant due to higher sensitivity to image shifts. Whether this is the case can be easily tested experimentally by shifting the images in the test set. Have the authors explored validating the hypothesis in this manner?\", \"there_are_also_a_few_small_problems_in_the_main_text\": \"\\u2013 p. 
3, in Definition 1, it is not clear how the operator is defined. A reference to eq. (3) should suffice.\\n\\u2013 p. 4, \\u201cevery input channels\\u201d should be \\u201cevery input channel\\u201d.\\n\\u2013 p. 7, \\u201cpredicting power\\u201d should probably be \\u201cpredictive power\\u201d here and throughout.\\n\\u2013 p. 8, \\u201cextract lesser information\\u201d should be \\u201cextract less information\\u201d.\\n\\u2013 p. 8, \\u201cdoes not restricts\\u201d should be \\u201cdoes not restrict\\u201d.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Succinct Paper with a Good Idea\", \"review\": \"Summary:\\nThe authors propose a scheme for combining the trusted mathematical properties of Dual Tree Wavelet Packets with CNN-style feature extraction. Specifically, they learn simple functions of wavelet kernel outputs, rather than learning kernels themselves from scratch. The fundamental gains are a significant drop in the number of parameters, while retaining feature expressiveness and intuition.\", \"clarity\": \"The paper clarity is significantly above average.\", \"quality\": \"The paper is very clear and concise, but it might be better if it compared across other nets besides AlexNet. For example, how does it compare to nets with feature extractors in the same order of magnitude (3k parameters), vs much bigger (alexnet: 25k).\\n\\nOriginality/Significance:\\nI am not extremely familiar with literature related to this idea. However, I think constraining CNN filters in this way is an important area of research.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"The authors propose a nice approach incorporating the wavelet packet transform into CNN, but the limited experiments fail to demonstrate the generalizability of the method.\", \"review\": \"Summary:\\nThis paper proposes a modification to the prevalent CNN architecture that leverages filter banks based on wavelets with the motivation to reduce trainable parameters and improve interpretability. More specifically, the first CNN layer of the AlexNet architecture is replaced with a wavelet packet transform (WPT). The authors show the wavelet packet transform can be written as a series of convolution operation and visually compare the filters of the trained AlexNet to the WPT filters. Experiments on ImageNet show the proposed method can match the performance of AlexNet on ImageNet with fewer parameters.\", \"pros\": \"This work tackles in a focused way the interpretability and formalism of deep neural networks, specifically CNN, by grounding the first layers of the network in the frame work of the wavelet transform.\\n\\nThe similarities shown in Figure 1 between trained AlexNet kernels and kernels from the proposed method are interesting\\n\\nThe reduction in learned parameters for the network under the proposed approach is a nice feature, and shows that stronger priors can guide learning.\", \"cons\": \"A crucial limitation of the proposed approach seems to be its application only to the first layer of the CNN. It would be interesting to apply the method to multiple layers of the network or in the extreme case, a network based solely on the WPT.\\n\\nAnother limitation of this work is that the proposed method is not applied to other popular and performant CNN architectures, e.g. ResNet (He et al. 2015), and is only applied to the relatively older AlexNet model.\\n\\nTo be more convincing, the method must be demonstrated on more datasets or tasks. For example, other image recognition datasets or other tasks (such as ASR). Results on one CNN architecture on one dataset does not fully demonstrate the generalizability of the method.\\n\\nThe authors show the proposed method reduces the number of learned parameters, however does the method reduce the total number of parameters? Is the inference time of the network improved/degraded with respect to the CNN baseline?\", \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"Replacing the first layer of AlexNet with wavelet packet transforms to reduce model parameters. Interesting but lack of explanations and comparisons.\", \"review\": \"To improve the understanding of CNNs, this paper proposes a method to constrain the behavior of AlexNet by replacing its first layer with a module based on wavelet packet decompositions. Three variants of wavelet decompositions are evaluated, including the separable wavelet packet transform and the 2D dual-tree real and complex wavelet packet transforms. A visualization algorithm is implemented to compare the extracted features with that of AlexNet. Results on an image classification task show that the accuracy of standard AlexNet can be achieved with less trainable parameters.\", \"pros\": [\"It is interesting to see the combined convolutional kernels, as shown in figure 3, resembles that of AlexNet.\", \"The paper presents some intuitive explanations on some aspects of the results, including (1) why and how the proposed DT-CWPT module can reduce the number of trainable parameters, with the accuracy rate maintained compared with standard AlexNet, and (2) why the three variants differ in accuracy.\"], \"cons\": \"- The insight provided by this paper is vague, and it is not so clear how this can improve our understanding of CNNs (which was claimed as one of the contributions). Furthermore, what is the benefit of replacing the first layer of AlexNet with pre-designed non-trainable convolutions? Do we have better adversarial robustness, out-of-distribution transfer, etc.? \\n- The experimental results of the paper do not seem very convincing. \\n(1) Despite the reasons stated in the first paragraph of page 2, AlexNet is not SOTA nor near SOTA, and the adaption of such a benchmark does not seem to provide persuasive results. \\n(2) In the proposed networks, only the first layer of AlexNet is replaced, but most of the parameters are in the deeper layers, especially the FC layers. It is hard to see the advantage of such a reduction of trainable parameters. \\n- Lack of experiments. \\n(1) Only one dataset is used, and we do not know how the filters would be affected when applied to a different dataset. \\n(2) Do other choices of CMFs generate different results? \\n(3) In the introduction, the paper refers to some previous work on replacing freely-trained CNN layers with more constrained structures. What is the advantage of the proposed work over the existing ones?\", \"further_questions\": [\"In figure 3, only filters of the red channel (i.e., $W[0,\\\\cdot]$) are shown. But readers could also be interested in $W[i,\\\\cdot]$ for $i=1,2$. Maybe a colored visualization of the filters is better.\", \"Still figure 3, how exactly is the filter cropping done, and how can we be convinced that the cropped parts of the matrices are negligible, e.g., with small Lp norms?\", \"Appendix, algorithm 1: Does $\\\\langle W_r\\\\rangle$ mean the shape of the tensor $W_r$?\", \"Section 3.1: The number $N'$ in the dimension of $D$ becomes 56 in $Y$. Does the full-convolution (as defined in section 2) guarantee $N'=56$, or is there a cropping procedure?\"], \"rating\": \"4: Ok but not good enough - rejection\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
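The composition property discussed throughout the record above (the paper's proposition: a cascade of strided convolutions collapses into a single convolution with a wider, dilated kernel, which is how two wavelet packet levels can emulate AlexNet's stride-4 first layer) can be checked numerically in a toy setting. The sketch below is not the authors' code: it uses 1D single-channel signals and Haar filters rather than the paper's 2D dual-tree complex packets with Q-shift CMFs, and all sizes are arbitrary assumptions.

```python
import numpy as np
from itertools import product

h = np.array([1.0,  1.0]) / np.sqrt(2)  # Haar low-pass filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar high-pass filter

def corr_stride(x, f, s):
    # Valid cross-correlation of x with filter f, subsampled with stride s.
    L = len(f)
    return np.array([x[i:i + L] @ f for i in range(0, len(x) - L + 1, s)])

def up2(f):
    # Upsample a filter by 2: insert a zero between consecutive taps.
    z = np.zeros(2 * len(f) - 1)
    z[::2] = f
    return z

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

for f1, f2 in product([h, g], repeat=2):  # the 4 bands of a 2-level packet tree
    two_stages = corr_stride(corr_stride(x, f1, 2), f2, 2)
    composed = corr_stride(x, np.convolve(f1, up2(f2)), 4)
    assert np.allclose(two_stages, composed)
print("each 2-level WPT band == one stride-4 convolution with kernel f1 * up2(f2)")
```

The composed per-band kernel is the first-stage filter convolved with the second-stage filter dilated by the first stage's stride; with longer CMFs these composed kernels grow well beyond the first stage's support, consistent with the center-cropping of combined kernels discussed in the responses above.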
WPO0vDYLXem | Hyperparameter Transfer Across Developer Adjustments | [
"Danny Stoll",
"Jörg K.H. Franke",
"Diane Wagner",
"Simon Selg",
"Frank Hutter"
] | After developer adjustments to a machine learning (ML) algorithm, how can the results of an old hyperparameter optimization (HPO) automatically be used to speed up a new HPO? This question poses a challenging problem, as developer adjustments can change which hyperparameter settings perform well, or even the hyperparameter search space itself. While many approaches exist that leverage knowledge obtained on previous tasks, so far, knowledge from previous development steps remains entirely untapped. In this work, we remedy this situation and propose a new research framework: hyperparameter transfer across adjustments (HT-AA). To lay a solid foundation for this research framework, we provide four simple HT-AA baseline algorithms and eight benchmarks changing various aspects of ML algorithms, their hyperparameter search spaces, and the neural architectures used. The best baseline, on average and depending on the budgets for the old and new HPO, reaches a given performance 1.2-3.6x faster than a prominent HPO algorithm without transfer. As HPO is a crucial step in ML development but requires extensive computational resources, this speedup would lead to faster development cycles, lower costs, and reduced environmental impacts. To make these benefits available to ML developers off-the-shelf and to facilitate future research on HT-AA, we provide Python packages for our baselines and benchmarks. | [
"Meta Learning",
"Hyperparameter Optimization",
"Transfer Learning"
] | Reject | https://openreview.net/pdf?id=WPO0vDYLXem | https://openreview.net/forum?id=WPO0vDYLXem | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"TA7hSp5sEb_",
"Jjb3tdGr2sV",
"HEaCtKK0OZG",
"_Aujm7Xunbp",
"GFNirA-GMNP",
"2y2NSo1hl6",
"6H4FaCc8ozb",
"kKh9Lx0o0_q",
"a9QwURyFqST",
"ViaG_ysLD-y",
"ws7UWASYsUu",
"y2eiE_6aNNq",
"d_KfjzqNwcN",
"y2uiQnLUfvn",
"_H1rl53ao9f",
"1R0_Z-XNzSF",
"vcFDS9191YH",
"7SfnZKZF6rq",
"kpq4q43rjnj",
"OPuFsiAXKar",
"GPNWkoPdV1",
"X79cX66p_X",
"oCmcJqC3rK",
"OKYKA1MoYap",
"x4ucCPwLms",
"DqhIHj9Qt9",
"o8JKdk9Y_U",
"MR_WqhSu4k"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040358189,
1606296651020,
1606289107695,
1606238692271,
1606238329254,
1606178389140,
1606174013139,
1606163538367,
1606140738662,
1606140644037,
1606136437345,
1606123460927,
1606123205466,
1605863157127,
1605863077186,
1605862914233,
1605862753754,
1605862533026,
1605862446604,
1605862106608,
1605862005354,
1605861811959,
1605861791236,
1605861607475,
1603914942804,
1603846480084,
1603701422276,
1603542504021
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3570/AnonReviewer3"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Reject\", \"comment\": [\"The paper has been actively discussed, both during and after the rebuttal phase. I enjoyed, and I am thankful for, the active communication that took place between the authors and the reviewers.\", \"On the one hand, the reviewers agreed on several pros of the paper, e.g.,\", \"Clear, well presented manuscript\", \"The presentation of practically-relevant setting\", \"A work that fosters reproducible research (both BO data and algorithms are made available)\", \"Careful experiments\", \"On the other hand, several important weaknesses were also outlined, e.g.,\", \"_Novelty_: While the authors claim they \\u201cintroduce a practically relevant and fundamentally novel research problem\\u201d, existing commercial HPO solutions already mention, and propose solutions for, the very same problem, e.g., [AWS](https://docs.aws.amazon.com/sagemaker/latest/dg/automatic-model-tuning-warm-start.html) (section \\u201cTypes of Warm Start Tuning Jobs\\u201d) and [Google cloud](https://cloud.google.com/blog/products/gcp/hyperparameter-tuning-on-google-cloud-platform-is-now-faster-and-smarter) (section \\u201cLearning from previous trials\\u201d). The reviewers all agreed on the fact that this down-weights the novelty aspect (claimed many times in the rebuttal and the manuscript): The paper formalizes an already existing framework rather than introducing it.\", \"In the light of the weakened \\\"novelty\\\" contribution (see above), the reviewers regretted the absence of a novel transfer method _tailored to HT-AA_, which would have certainly strengthened the submission.\", \"_\\u201cDynamic range\\u201d of the benchmark_: It is difficult to evaluate the capacity of the benchmark to discriminate between different approaches (e.g., see new Fig. 3 showing the violin plot with all three methods for transfer, as suggested by Reviewer 1: the improvements over \\\"best first\\\" seem marginal at best). To better understand the benchmark, it would be nice to illustrate its \\u201cdynamic range\\u201d by exhibiting a more powerful method that would substantially improve over \\u201cbest first\\u201d.\", \"As illustrated by its scores, the paper is extremely borderline. Given the mixed perspectives of pros and cons, we decided with the reviewers to recommend the rejection of the paper.\"]}",
"{\"title\": \"Improved Explanation and Added Pseudocode\", \"comment\": \"Thanks a lot for reading through our response in detail, seeing value in our framework, calling Transfer GP a good and valuable baseline, and finally, for increasing your score.\\n\\nWe have improved the explanation of Transfer GP along with other improvements to the approaches section, and added TPE (and Transfer TPE) back in our evaluation on the request of another reviewer. We also added Pseudocode for Transfer GP/TPE in the main paper.\\n\\nPlease let us know if you would like further details on Transfer GP, or have any additional suggestions for improving our paper.\"}",
"{\"title\": \"Performed Revision\", \"comment\": \"Since we did not hear back anymore in the limited time, we went ahead and implemented the change we proposed below (https://openreview.net/forum?id=WPO0vDYLXem¬eId=6H4FaCc8ozb). If you have any further suggestions, we would be happy to add additional discussion for the final version of the paper.\", \"regarding_the_change\": \"We now provide a detailed discussion on the differences and contributions of HT-AA compared to existing work on hyperparameter transfer already in Section 2. Thank you for your suggestion, we think this earlier relation makes our paper stronger.\"}",
"{\"title\": \"Paper Revisions After Replies\", \"comment\": [\"We performed the following major revisions in response to the replies to our initial response:\", \"Based on a reviewer request, we added the TPE evaluation back in so we now look at two sets of baselines and transfer approaches.\", \"We now provide a detailed discussion on the differences and contributions of HT-AA compared to existing work on hyperparameter transfer already in Section 2.\", \"We improved the structure and explanations in Section 3 (Baseline Algorithms for HT-AA), especially for Transfer GP/TPE, and added Pseudocode for Transfer GP/TPE.\"], \"we_performed_the_following_minor_revisions_in_response_to_the_replies_to_our_initial_response\": [\"In some cases, we now show more than two approaches in a single plot.\", \"We improved the exposition for the standardized improvement analysis (Appendix D).\", \"We added per-benchmark plots for the standardized improvement analysis (Appendix D).\"]}",
"{\"title\": \"Further Revisions (Major and Minor)\", \"comment\": \"Thank you for reading our rebuttal very carefully and for responding quickly with another round of helpful comments.\\n\\n\\n9. \\u201cIt is nice that you replaced TPEs with the more widely adopted (or state of the art?) GPs, but merely replacing one baseline by the another does not qualify as \\\"increasing the number of baselines\\\".\\u201d\\n\\n--> We agree that the results we had for TPE still remain useful and have now added the evaluation with TPE and T2PE into Appendix B, Appendix C, and Appendix D, and have included them as a second row in Figure 3 and Table 1. We hope that this addresses your concern.\\n\\n\\n10. \\u201cI am not convinced by your argument on surrogate benchmark usage. I know that they only represent half of the benchmarks, yet the point is that they might present a risk of returning a false result. It doesn't matter if a benchmark suite is more widely available due to lower computing costs if it is at risk of returning incorrect results. A couple of non-simulated experiments could help persuade a reader that this is not the case, and the computing budget of training SVMs of low and mid-sized datasets is far from being prohibitive.\\u201d\\n\\n--> We hear your concerns. While we would like to mention that we don\\u2019t see surrogates as returning false results but rather as defining a slightly different blackbox function that still shares many properties with the non-surrogate version (much more so than other blackbox functions popular in HPO, like Branin), we do agree that adding some experiments with real benchmarks will increase trust in our empirical results. We are happy to replace the SVM and XGB surrogate benchmarks with similar SVM and XGB benchmarks that do not rely on surrogates for the final version of the paper. In order to keep compute requirements low (also for future use of the benchmarks), we would reduce the number of tasks from 10 to 3 and the number of random seeds from 25 to 15. Would this fix your concerns? Or would you rather have us *add* these non-surrogate benchmarks, rather than replacing the surrogate versions? Either would be fine for us.\\n\\n\\n11. \\u201cOn speedup & normalized average objective improvement: I would say the normalized average objective improvement is also a little misleading, and it abstracts away a lot of information.\\u201d\\n\\n--> We have 54 tasks (with 3 tasks per benchmark this would still be 24 tasks) some of which have different underlying metrics, for each we have multiple repeats, and we look at these over 9 different budget combinations. We are aware of four different approaches in the literature to this kind of data situation: ranking based, standardized improvement, normalized regret, and speedup. We would argue that aggregations (at different levels) of speedup and standardized objective improvement are the most informative of these. In order to guard against losing information by the aggregation across benchmarks, we also provide results for individual benchmarks for speedup in Appendix B and have added this for the improvement in Appendix D. Do you know a better approach to analyzing this data? If so, we would be happy to implement it for the final version of the paper.\\n\\n\\n12. \\u201cPerhaps more details should be provided as to how it was computed (or a citation provided for the reader unfamiliar with glass delta), for instance you show the distribution (over repetitions?) of the average improvement on the 8 benchmarks. 
What is the distribution of the violin plot computed, and what about the mean of the new algorithm?\\u201d\\n\\n--> We have added an explanation of how the metric is computed for each task, and details for how we aggregate (same as for the speedup plots) to Appendix D. We now provide results for two aggregation levels: the distribution over task standardized mean improvements, where we compute the standardized mean improvement over repetitions and show results on a per-benchmark level; and the distribution over benchmark means, i.e., the means across task standardized mean improvements for all tasks in a given benchmark.\\n\\n\\n13. \\u201cAlso once again here, as in many other places in the paper, you fall victim to your choice of two-sided violin plots which only allow for the comparison of two methods in one graph.\\u201d\\n\\n--> Thank you for bringing this up again. We have modified our violin plots to show multiple methods. Thank you for the helpful suggestion, we believe this makes the visual presentation of the results a lot stronger.\"}",
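For readers unfamiliar with the Glass delta mentioned in point 12 above, the per-task standardized improvement can be sketched as follows; this is an illustration of the general statistic, not the authors' exact aggregation code:

    import statistics

    def glass_delta(baseline_objectives, new_objectives):
        # Glass's delta: mean improvement of the new method over the baseline,
        # standardized by the baseline's standard deviation across repetitions
        # (for a minimized objective, improvement = baseline - new).
        # The authors note elsewhere in this thread that a small constant is
        # added when the baseline standard deviation is 0.
        return (statistics.mean(baseline_objectives)
                - statistics.mean(new_objectives)) / statistics.stdev(baseline_objectives)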
"{\"title\": \"response to rebuttal\", \"comment\": [\"I would like to thank the authors for a very detailed rebuttal. I have read everything, and I have some further comments (for now or future revisions of the paper).\", \"It is nice that you replaced TPEs with the more widely adopted (or state of the art?) GPs, but merely replacing one baseline by the another does not qualify as \\\"increasing the number of baselines\\\".\", \"I am not convinced by your argument on surrogate benchmark usage. I know that they only represent half of the benchmarks, yet the point is that they might present a risk of returning a false result. It doesn't matter if a benchmark suite is more widely available due to lower computing costs if it is at risk of returning incorrect results. A couple of non-simulated experiments could help persuade a reader that this is not the case, and the computing budget of training SVMs of low and mid-sized datasets is far from being prohibitive.\", \"On speedup & normalized average objective improvement: I would say the normalized average objective improvement is also a little misleading, and it abstracts away a lot of information. Perhaps more details should be provided as to how it was computed (or a citation provided for the reader unfamiliar with glass delta), for instance you show the distribution (over repetitions?) of the average improvement on the 8 benchmarks. What is the distribution of the violin plot computed, and what about the mean of the new algorithm? Also once again here, as in many other places in the paper, you fall victim to your choice of two-sided violin plots which only allow for the comparison of two methods in one graph.\"]}",
"{\"title\": \"Happy to Include More Discussion\", \"comment\": \"Thank you for your kind words and for your additional suggestion. We are currently assessing where this additional discussion would be best placed. We already have a large related work section (almost 2 pages), but we would be very happy to include additional discussions and elaborations you see as missing in the remaining author response time. We also realize that our related work section comes relatively late in the paper, and we are wondering whether your concern perhaps would best be addressed by featuring the comparison to existing hyperparameter transfer work more prominently, i.e., already in Section 2 instead of in the related work section? We\\u2019ll gladly implement the changes you suggest.\"}",
"{\"title\": \"After reading the authors comments and revision\", \"comment\": \"I see now that the novelty of the paper was not in the suite of surrogates to a benchmark suite of tasks, therefore a few of my comments do not apply. Instead the paper seems to be formalizing a task that most practitioners are very familiar with, namely tuning a black-box after making minor modifications to the algorithm or hardware. I can see some value in such a formalization. Furthermore, having Transfer GP as a good baseline for the benchmark is valuable; although that approach can certainly use a much better explanation.\\n\\nOverall, I'll increase my score to allow for some discussion.\"}",
"{\"title\": \"We Reply Above\", \"comment\": \"We disagree and have replied to your response above (https://openreview.net/forum?id=WPO0vDYLXem¬eId=ViaG_ysLD-y).\"}",
"{\"title\": \"Our benchmarks do not only modify the search space\", \"comment\": \"Thank you for reading our detailed response and your further comments.\\n\\nNo, our benchmarks do not only modify the search space X (see Table 1; adjustments labelled as homogeneous do not change the search space). E.g., in XGB-B (which changes four constants) the search space does in fact not change at all across our simulated developer adjustment. Most other benchmarks include developer adjustments that do not change the search space, e.g., in FCN-A (which among other changes increases the number of epochs), or FCN-B (which among other changes includes the change to a different learning rate schedule).\\n\\nIs your issue that we simulate developer changes by defining a larger search space and then define one part of the search space as \\u201cversion 1\\u201d and a different part of the search space as \\u201cversion 2\\u201d (in addition to e.g., changing the #epochs)? If so, any changes to an algorithm can be seen this way.\\n\\nAlso, in our initial reply we already relate to search space learning and unbounded Bayesian optimization: We discuss why for our benchmarks search space learning (Perrone et al, 2019) modified to work with heterogeneous adjustments, is equivalent to only-optimize-new (except that that technique is only defined on continuous dimensions and simply ignores categoricals); we therefore already do provide that comparison for benchmarks without categorical hyperparameters in the initial HPO. We also discuss why unbounded Bayesian optimization does not apply to our benchmarks (there is no transfer learning and categoricals have no bounds). Unbounded BO is an alternative to numerical range adjustments, not an alternative to HT-AA algorithms.\\n\\nWe would very much like to provide a comparison against any method the reviewer suggests, if feasible even during the limited time remaining in the author response period, and more comprehensively for the final version, and we are standing by to hear which one we should compare to.\"}",
"{\"title\": \"Author's responses\", \"comment\": \"I appreciate the authors' detailed response of my questions. My concerns are mitigated a bit by the new revision of the paper that shows better results/performance than the original paper. I think this is an interesting research topic, however, I'd like to see the paper highlights more its differences with prior studies and contributions to this research area. I keep my score unchanged.\"}",
"{\"title\": \"No change\", \"comment\": \"Your response contains a lot of \\\"could\\\", while all you are doing in these benchmarks is to modify the search space of hpo. I hope you understand what my concern is.\\nIf you want to do something new, then please do it and do not just suggest it is being done. Create more convincing benchmarks, where real developer changes are happening. Do not just suggest that changes to the search space can be interpreted this way.\"}",
"{\"title\": \"Read author response\", \"comment\": \"I read the verbose author response, thanks for putting this together.\\nI have to say I am still not convinced. The authors seem to claim this is a novel setup, and I just do not see a proper justification. Their use cases are instances of modifying the search space, and there is enough prior work on that one. They cannot avoid having to compare against such work by just claiming what they do is novel and different.\\n\\nThey have two options. Either recognize this is more or less an instance of modifying the search space plus some transfer, and then provide a proper comparison. Or come up with much more convincing benchmarks, that would really convince me of them not just being an instance of changing the search space.\\n\\nNo change in my vote.\"}",
"{\"title\": \"Initial Response 1/2\", \"comment\": \"Thanks for your comments, suggestions, and for recognizing the practical relevance of the proposed problem setting. In the following we provide detailed replies to your questions and comments.\\n\\n\\n1. \\u201cAll work is based on TPE, which is frequently used, but not SotA for HPO. [...] Why not also use GP-BO?\\u201d\\n\\n--> We have replaced TPE with GP for all transfer and non-transfer approaches. This update results in a much faster non-transfer baseline compared to TPE, and at the same time, the GP based transfer approaches provide a larger speedup over GP than the TPE based transfer approaches did over TPE, leading to an overall much larger speedup. Thank you for the proposal, we think this made the paper a lot stronger!\\n\\n\\n2. \\u201cThe paper does not elaborate on the motivations of these \\\"developer changes\\\". One could suspect many are attempts to modify/improve the HPO process itself, over the same model. And this is not really new, there is lots of prior work to help shaping search spaces, both by quantifying HP relevance or by learning search ranges. Say, a developer modifies the value range of an HP. What other motivation would there be than mistrust in the previous range, but no change of algorithm. Same for adding/removing an HP, which normally just means going from fixed default to HPO or back. In fact, the 8 benchmarks are all of that sort. My feeling is that by viewing the problem in this way (namely, just HP search space optimization), there is suddenly a lot more related work not taken into account here.\\u201d\\n\\n--> As we think this is an important discussion and you have signaled that this is your main concern, we will answer this question on the possible motivations of developer changes with great detail. As there are so many potential motivations for developer adjustments, we created a separate comment thread that serves as an appendix and answers your question by listing motivations for each adjustment type we differentiate, which of our benchmarks include a given type of adjustment, and example motivations for the adjustments of each of our benchmark. In addition to this appendix, in this main response, we relate our framework (HT-AA) to search space learning and unbounded hyperparameter optimization.\\n\\n\\nFirst, let us introduce some notation: The objective f(A(x), T, H) that the hyperparameter optimization algorithm tries to optimize depends on three entities: the learning algorithm A(x) instantiated with hyperparameters x \\\\$\\\\in$ X, the task T (in supervised learning this would be a train- and validation/test dataset, and the evaluation metric), and the hardware H the algorithm is run on. As a reminder, in our paper we differentiate between heterogeneous and homogeneous developer adjustments: Heterogeneous adjustments change X and potentially also A or H; while homogeneous adjustments do not change X, but change at least one of A or H.\", \"learning_search_spaces\": \"approaches like the one of Perrone et. al. (2019) learn to prune X by using a meta training set of tasks {T}, i.e., they are approaches for hyperparameter transfer across tasks (HT-AT). As such they are evaluated using a large number of meta training tasks. 
These could in principle be adapted for HT-AA, by pruning only the part of the search space that was already present before the developer adjustment, similarly to how T2PE (in the revised version TGP as we use GPs instead of TPE now, based on the reviews) builds a model only for the already existing hyperparameters. However, in the basic HT-AA problem we only have data for one task T for one development step, i.e., only one previous run in general. This is problematic, as these approaches might prune parts of the search space that are only good after the developer adjustments and would have no way to correct this (Perrone et. al. (2019); Wistuba et. al. (2015) for categoricals). Applying e.g., the approach of Perrone et. al. (2019) with the adaptation to HT-AA described above, to a search space of continuous hyperparameters across a homogeneous adjustment, is equivalent to only-optimize-new (which performs horribly in our experiments). The above discussion also highlights some of the issues in adapting approaches designed for a transfer across tasks (HT-AT) to a transfer across developer adjustments (basic HT-AA).\\n\\nPerrone, V., Shen, H., Seeger, M. W., Archambeau, C., & Jenatton, R. (2019). Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. In Advances in Neural Information Processing Systems (pp. 12771-12781).\\n\\nWistuba, M., Schilling, N., & Schmidt-Thieme, L. (2015, September). Hyperparameter search space pruning\\u2013a new component for sequential model-based hyperparameter optimization. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 104-119). Springer, Cham.\"}",
"{\"title\": \"Initial Response 2/2\", \"comment\": \"Unbounded HPO: The literature on unbounded HPO (some citations below) does not take a transfer into account, and therefore assumes an already evaluated hyperparameter setting to not change in objective value (other than perhaps noise). This is problematic, as homogeneous and heterogeneous adjustments can change any part of the algorithm A or hardware H, and hence change the value of f(A(x), T, H) without changing x. Also, the concept of unbounded HPO does not apply to categorical hyperparameters (there are no bounds), of which we have many in our experiments and in representation learning in general.\\n\\nShahriari, B., Bouchard-C\\u00f4t\\u00e9, A., & Freitas, N. (2016, May). Unbounded Bayesian optimization via regularization. In Artificial intelligence and statistics (pp. 1168-1176).\\n\\nHa, H., Rana, S., Gupta, S., Nguyen, T., & Venkatesh, S. (2019). Bayesian Optimization with Unknown Search Space. In Advances in Neural Information Processing Systems (pp. 11795-11804).\\n\\nNguyen, V., Gupta, S., Rana, S., Li, C., & Venkatesh, S. (2019). Filtering Bayesian optimization approach in weakly specified search space. Knowledge and Information Systems, 60(1), 385-413.\\n\\nIf you are satisfied with these elaborations we would be happy to include parts of it in the paper and/or appendix. Would that be fine?\\n\\n\\n3. \\u201cThis paper does not really propose new methodology, except maybe T2PE, which is a pretty basic heuristic. There is a lot of prior work on transfer HPO, some of which could certainly be adopted. Given that the paper is mainly empirical, one would expect a more thorough and wider evaluation.\\n\\n--> We want to note that while there is a lot of prior work on HPO transfer across tasks, we are the first to draw attention to the problem of hyperparameter transfer across adjustments (the task does not change but the algorithm, hardware, and/or search space). A paper with the main goal of drawing attention to a new problem requires a different empirical treatment compared to a paper that better addresses a known problem. In our reply to all reviewers above, we elaborate on the goals of our empirical evaluation. As for adapting existing algorithms for hyperparameter transfer across tasks to this new setting, (or even for a transfer across tasks and across adjustments), we see this as an exciting future research direction -- one among many that our paper gives rise to.\\n\\n\\n4. \\u201cOnly the best-first baseline works well, results for the others are not shown.\\u201d\\n\\n--> This must have been a misunderstanding, as we do show results for all methods we describe: We compare T2PE (in the revised version TGP as we use GPs instead of TPE now, based on the reviews) with best-first in Figure 3, and T2PE with best-first+T2PE in Table 2. Violin plots for best-first+T2PE could be found in Appendix D (now in the main paper). For drop-unimportant and only-optimize-new we can not show speedups, as these approaches fail to reach the target objective in a very large percentage of cases. Instead we showed these failure rates in Figure 4 (now Figure 5).\\n\\n\\n5. \\u201cWhile the paper categorizes types of modification, the empirical evaluation does not differentiate among them anymore.\\u201d\\n\\n--> We added a differentiation to Table 1. For the analysis itself we do not differentiate among the categories though, as many benchmarks have multiple categories of adjustments. 
Having said that, we want to point out that in Appendix B, we show performance on a per-benchmark-basis.\\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
"{\"title\": \"Appendix: Motivations for Developer Adjustments 1/2\", \"comment\": \"First, let us introduce some notation: The objective f(A(x), T, H) that the hyperparameter optimization algorithm tries to optimize depends on three entities: the learning algorithm A(x) instantiated with hyperparameters x \\\\in X, the task T (in supervised learning this would be a train- and validation/test dataset and the evaluation metric), and the hardware H the algorithm is run on.\", \"in_our_paper_we_differentiate_between_heterogeneous_and_homogeneous_developer_adjustments\": [\"Heterogeneous adjustments change X and potentially also A or H; while homogeneous adjustments do not change X, but change at least one of A or H. In the following we further differentiate these adjustment types, to then discuss motivations and list which benchmarks in our benchmark suite are examples of a given type.\", \"Homogeneous adjustments that change A (and optionally H):\", \"These could change any arbitrary part of algorithm A, as long as this change does not affect the search space X.\", \"Motivations could be: fixing a bug, adding support for different hardware H (e.g., TPU, different robotic embodiment, \\u2026), enabling use of special hardware subroutines, rewriting the complete implementation, changing any constants in the code, improving or changing any part of the learning model, optimization routine, or dataloading, etc. .\", \"The following benchmarks include such adjustments:\", \"FCN-A (Increase #units-per-layer 16\\u00d7; Double #epochs;Fix batch size hyperparameter). A scenario for this benchmark could e.g., be that a developer received a more powerful GPU, and could hence increase the model size and training time, but had to fix the batch size to fit the larger model into GPU memory.\", \"FCN-B (Add per-layer choice of activation function; Change learning rate schedule). A scenario for this benchmark could e.g., be that a developer has finally implemented a learning rate schedule that is known to perform good for the task T, and at the same time performs a heterogeneous adjustment.\", \"XGB-A (Change four unexposed booster hyperparameter values). This could occur e.g., when a developer changes the default values of a used library to settings used in the literature.\", \"SVM-A (Change kernel; Remove hyperparameter for old kernel; Add hyperparameter for new kernel). E.g., when a visualization of the data clearly shows that a radial kernel makes no sense, but a polynomial kernel does.\", \"Homogeneous adjustments that change H (and optionally A):\", \"These could change any part of the hardware the algorithm is run on\", \"Motivations could be: Change the robotic embodiment to use stronger actuators, change the material of the tires of the robot, use a more powerful CPU to preprocess the data, run on a bigger number of GPUs and change the data parallelisation mode, increase RAM to avoid loading bottlenecks, move to TPU, change to a GPU with hardware subroutines for sparse neural networks, change to a GPU with hardware subroutines for low precision arithmetics, etc.\", \"We do not include any hardware adjustments in our benchmarks, since we did not have access to different hardware environments, but we are committed to doing so in the future. 
A concrete path to this just appeared by means of a new hardware-aware NAS benchmark HW-NAS-Bench (a parallel ICLR submission: https://openreview.net/forum?id=EohGx2HgNsA), which would allow changing between six different hardware platforms (note that the paper also shows that different architectures have optimal tradeoffs on different hardware, i.e., the architectural hyperparameters need to change after a change of hardware).\"]}",
"{\"title\": \"Appendix: Motivations for Developer Adjustments 2/2\", \"comment\": [\"Heterogeneous adjustments that change X by changing the range for one or multiple hyperparameters (and optionally other changes to X, A, or H):\", \"Motivations in the case of numerical hyperparameters could be: optimizing the search space, moving to a GPU with larger memory so that e.g., the batch size or model size can now be larger, changing from a discrete range to a continuous range after being made aware of the possibility of a relaxation of a hyperparameter, etc.\", \"Motivations in the case of categorical hyperparameters could be: removing a choice that has a bug, adding a choice after implementing the corresponding code in A (e.g., new type of optimizer or kernel), moving to hardware H that allows for additional choices (e.g., different arithmetic precisions or representations), \\u2026\", \"The following benchmarks include such adjustments for numerical hyperparameters:\", \"SVM-B (Increase range for cost hyperparameter): E.g., search space optimization.\", \"The following benchmarks include such adjustments for categorical hyperparameters\", \"FCN-B (Add per-layer choice of activation function; Change learning rate schedule) E.g., when a developer implements additional activation functions and now wants to optimize over them, and at the same time performs a homogeneous adjustment.\", \"NAS-A (Add 3x3 average pooling as choice of operation to each edge): E.g., after learning about the idea of average pooling, the developer adds this to the NAS search space.\", \"Heterogeneous adjustments that change X by adding or removing one or multiple hyperparameters (and optionally other changes to X, A, or H):\", \"Motivations could be: unfixing/exposing an existing hyperparameter, fixing an existing hyperparameter like the batch size after moving to a GPU with less memory, fixing an existing categorical hyperparameter because a bug has been found in one of two choices, a part of the algorithm A was changed and the new version of that part includes one or multiple new hyperparameters (e.g., when changing SGD to ADAM, when updating to a GPU that has support for low precision arithmetics, or when changing the NAS cell template), a part of algorithm A was changed and the new version of that part does not include certain hyperparameters anymore.\", \"The following benchmarks include such adjustments:\", \"FCN-A (Increase #units-per-layer 16\\u00d7; Double #epochs; Fix batch size hyperparameter) E.g., a developer received a more powerful GPU, and could hence increase the model size and training time, but had to fix the batch size to fit the larger model into GPU memory.\", \"FCN-B (Add per-layer choice of activation function; Change learning rate schedule): E.g., when a developer implements additional activation functions and now wants to optimize over them, and at the same time performs a homogeneous adjustment.\", \"XGB-A (Expose four booster hyperparameters): The developer learns that certain hyperparameters are important to tune, and hence does not use the default values of the library anymore.\", \"NAS-B (Add node to cell template (adds 3 hyperparameters)). E.g., when moving to a larger GPU that can fit a larger neural network into memory.\", \"SVM-A (Change kernel; Remove hyperparameter for old kernel; Add hyperparameter for new kernel). 
E.g., when a visualization of the data clearly shows that a radial kernel makes no sense, but a polynomial kernel does, however, now the degree of the polynomial needs to be tuned (which did not exist as a hyperparameter before), and the hyperparameter for the radial kernel is dropped.\", \"We hope that this extensive list demonstrates the broad applicability of HT-AA, and that it is by no means limited to search space optimization. If this broad applicability of our problem formulation changes a reviewer\\u2019s mind, we would kindly ask them to update their score accordingly.\"]}",
"{\"title\": \"Aspect of Introducing a New Problem\", \"comment\": \"We thank all reviewers for their helpful comments! We are glad that the new problem we proposed has been recognized as \\u201cvaluable\\u201d, \\u201cinteresting\\u201d, \\u201cclearly introduced\\u201d, and \\u201cquite relevant in practice\\u201d by the reviewers. We reply to the concerns of each reviewer separately in detail, and here, we comment on points that we see as relevant to all reviewers.\\n\\nAs reviewers are rarely in the situation where they review a paper with the main goal of drawing attention to a new problem setting, we would like to point out that the ICLR reviewing guidelines (https://iclr.cc/Conferences/2021/ReviewerGuide#step-by-step; 2.1) state that such papers, compared to papers that e.g., try to better address a known problem, require different considerations as to potential value and impact. We believe that a strong paper that introduces a problem (1) is centered around an interesting problem that is relevant in practice, (2) opens the gate for many new research opportunities, and (3) lays a solid foundation for further research on the problem. We believe that our paper fulfills all of these:\\n\\n1. The reviewers clearly recognize hyperparameter transfer across adjustments (HT-AA) as an important problem that is relevant in practice.\\n\\n2. In Section 6 (Related Work and Research Opportunities) we discuss some of the many research opportunities in extending the HT-AA framework, designing improved algorithms for HT-AA, and in applying the idea of automated knowledge transfers across developer adjustments to (meta) learning scenarios other than HPO. Additional ideas for future research directions are also given by the reviewers (e.g. HT-AA for ensemble learning). We believe the many opportunities for future research based on HT-AA clearly highlight the potential impact our paper can have.\\n\\n3. To lay a solid foundation, the goal of our empirical study should not be to propose the best approach to this new algorithmic problem, or even to introduce exciting methodology, but (a) to provide code to researchers for cheap and hardware-independent benchmarks, and well-vetted baselines to base future work on, (b) to provide strong evidence for the advantage of automatically transferring knowledge about hyperparameters across adjustments, and (c) to evaluate how well actually-practiced manual strategies might work.\\nWe deliver on all of these. In fact, the parts about baselines and evaluating actually-practiced strategies tie in with the question several reviewers asked of why we include the approaches only-optimize-new and drop-unimportant in our paper, even though they perform badly in the experiments. We do not propose only-optimize-new and drop-unimportant as new methods, but we rather include them as actually-practiced methods, alongside the straightforward best-first strategy and T2PE (now TGP). While best-first, T2PE/TGP, and the combination of these two work surprisingly well, only-optimize-new and drop-unimportant work horribly, even though dropping unimportant hyperparameters is a strategy practiced in a non-algorithmic fashion (e.g., in the seminal work on AlphaGO), and to only tune new hyperparameters is certainly widespread as well. We do not drop the evaluation of these two approaches as the negative results are evidence that actually-practiced manual approaches to the HT-AA problem perform worse than simply not doing any transfer at all. 
We changed the paper to more clearly highlight the points above.\"}",
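The best-first strategy referenced throughout this thread is simple enough to sketch in a few lines. The following is a minimal illustration under an assumed suggest-style optimizer interface (a hypothetical API, not the authors' released package):

    def best_first_hpo(objective, old_results, optimizer, budget):
        # Best-first transfer: evaluate the incumbent of the previous HPO
        # first (assuming it is still a valid configuration after the
        # adjustment), then fall back to ordinary no-transfer HPO.
        incumbent = min(old_results, key=lambda r: r["objective"])["config"]
        history = [(incumbent, objective(incumbent))]
        for _ in range(budget - 1):
            config = optimizer.suggest(history)  # hypothetical interface
            history.append((config, objective(config)))
        return min(history, key=lambda h: h[1])

The appeal of the strategy is exactly this triviality: the only transferred knowledge is a single configuration, yet the thread reports it is surprisingly hard to beat.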
"{\"title\": \"Paper Revisions After Reviews\", \"comment\": [\"We performed the following major revisions in response to the reviews:\", \"Based on the reviewers\\u2019 comments, we now base our evaluation on Gaussian processes (GPs). The HPO baseline now uses GPs and all transfer approaches do as well. We refer to the GP-based version of Transfer TPE (T2PE) as Transfer GP (TGP). Since GPs work much better on these low-dimensional problems, this modification results in a much faster non-transfer baseline compared to TPE; at the same time, the GP-based transfer approaches provide a larger speedup over GP than the TPE based transfer approaches did over TPE, leading to an overall much larger speedup. We believe this makes the paper much stronger and thank the reviewers for the suggestion.\", \"We have dropped the range adjustment part of what was formerly T2PE (now TGP) to make it as simple and easy-to-implement as our other baselines. TGP is now similar to only-optimize-new, but instead of using the best previous values of already existing hyperparameters, the algorithm uses a Gaussian process fitted on the projected results of the previous HPO. This change had little effect on the performance of TGP.\", \"Further, we performed the following minor revisions in response to the reviews:\", \"We have added missing details for our benchmark suite to Section 4 and Appendix A, and make it clearer that we use code and data from existing HPO benchmarks.\", \"We now list the adjustment type for each adjustment in our benchmarks in Table 1.\", \"We more clearly highlight why we include only-optimize-new and drop-unimportant, the two approaches inspired by actually-practiced manual strategies, in our evaluation.\", \"We now feature the combination of TGP (formerly T2PE) and best-first more prominently in the experiments section and show the per-benchmark performance in Appendix B.1.\", \"We improved the description for the comparison of GP (formerly TPE) across different seed ranges (Appendix E).\", \"We improved the description for the comparison of GP (formerly TPE) to random search and added a comparison of GP to TPE (Appendix F).\", \"We added an explanation for why we use the geometric mean to aggregate the speedups instead of the arithmetic mean.\", \"We added an explanation for why we use violin plots.\"]}",
"{\"title\": \"Initial Response 1/2\", \"comment\": \"Thanks for your helpful comments and suggestions.\\n\\n\\n1. \\u201cThis manuscript introduces a benchmark suite of hyperparameter optimization tasks, that simulate slight developer modifications, with the goal of optimizing a slightly modified algorithm given knowledge of its previous form. The paper further studies the performance of some deliberately naive approaches as a performance benchmark to accompany the suite.\\u201d\\n\\n--> This summary is not in line with what we present as our main contribution: \\u2018the introduction of a new research framework: Hyperparameter transfer across adjustments (HT-AA)\\u2018, which is very prominently featured in the abstract and introduction. Our main contribution is not a benchmark suite, but the introduction of a new *problem* that could be as impactful as the problem of hyperparameter transfer across tasks (HT-AT). This distinction is rather important for the reviewing process, as papers that introduce problems, compared to papers that e.g., try to better address a known problem, require different considerations as to potential value and impact. In our reply to all reviewers we discuss this aspect in detail.\\n\\n\\n2. \\u201cMy understanding is that these are surrogate models that are meant to simulate a real-world task. However there is no description as to how these surrogates were created or trained, nor how their fidelity to the original task was vetted. [...] My recommendations would be to [...] focus on vetting the surrogates in terms of their fidelity to the task they are meant to simulate; [...]\\u201d\\n\\n--> Half of our benchmarks (4 out of 8) do not use surrogate models but lookup tables. The SVM and XGB benchmarks are surrogate benchmarks and FCN and NAS benchmarks are tabular benchmarks (for an explanation of the difference we refer to Section 4). While in the original submission we provided citations and said that some benchmarks are based on surrogates and some on tabular data, we have now added this information explicitly. Further, all our benchmarks use code and runtime data from existing benchmarks in HPO research. While we have made this clearer in our revision, we would like to note that our original version already mentions this and includes respective citations in Section 4. As for the surrogates, we use an available open-source implementation by the HPOlib authors, which we now explicitly mention in Appendix A. Finally, in the case of the two NAS benchmarks the search space is based purely on categorical hyperparameters (i.e., which operation to apply on a given edge in the architecture graph) and objective values for all potential hyperparameter settings are in the lookup table. Therefore, the two NAS benchmarks are true to HPO for non-simulated code. Qualitatively, the results on the NAS benchmarks are similar to what we see across all benchmarks (Appendix C).\\nWe believe that the reviewer might have misunderstood the nature of our benchmarks, so let us clarify: regardless of what available open-source benchmark (tabular or surrogate) we based our HT-AA benchmarks on, we never fit a surrogate. We simply use one part of the table (/one part of the surrogate model\\u2019s search space) to define one \\u201cversion 1 of the code\\u201d and another part for \\u201cversion 2 of the code\\u201d, and study the transfer from version 1 to version 2.\"}",
"{\"title\": \"Initial Response 2/2\", \"comment\": \"3. \\u201cThe most simple and naive algorithm seems to provide similar speed-ups to the much more complicated proposed T2PE; especially when considering the much larger improvement of the naive method (see Fig. 11), none of the other proposed methods seem justified to me. [...] My recommendations would be to: [...] focus on demonstrating that accounting for the slight modifications in the subsequent HPO does indeed provide a benefit over the naive thing to do. For the record, I don't think this is an easy task.\\u201d\\n\\n--> First, we refer to the reply to all reviewers where we elaborate on the goals of our empirical evaluation, and explain in detail why the empirical evaluation of only-optimize-new and drop-unimportant is quite valuable to the community. Second, we have dropped the range adjustment part of T2PE (in the revised version TGP as we use GPs instead of TPE now, based on the reviews) to make it much simpler. TGP is now like only-optimize-new, but instead of using the best previous values of previously existing hyperparameters at each step, the algorithm samples from a model fitted on the projected results of the previous HPO. Finally, we actually do provide an approach that does indeed yield a benefit over best-first (which is not an easy task!), i.e., the combination of best-first and T2PE (in the revised version TGP). This approach provides an additional 0.1-0.3 average speedup where the number of evaluations for the previous HPO is larger than 10 (Table 2). We made changes to feature this result more prominently and added per-benchmark results for the combination of best-first and TGP to Appendix B (showing that the speedup of TGP+best-first over best-first is up to a factor of 1.5x in some benchmarks; see e.g., Figure 10, FCN-B and NAS-A).\\n\\n\\n4. \\u201cMy recommendations would be to: focus on creating a good benchmark suite (perhaps focus on a single or two domains as introduce many variants, instead of four domains with only two variants); [...] \\u201c\\n\\n--> Our benchmark suite (a) is based on code and data from commonly to widely used benchmarks in HPO research, (b) covers a wide range of algorithms, (c) includes developer adjustments of many different types and with many motivations [see our response to AnonReviewer4 point 4(which includes the large appendix comment \\u201cAppendix: Motivations for Developer Adjustments\\u201d)], (d) includes many tasks for each algorithm and adjustment, and (e) is independent of hardware and comparatively cheap to evaluate. We think these are the attributes of a high quality benchmark suite. \\n\\n\\n5. \\u201cJustify geometric mean. I'm not saying it's the wrong way to compare these, I just think it requires at least a sentence of justification.\\u201d\\n\\n--> We agree and have added a justification to our paper. Intuitively, the geometric mean is an average of speedup values. E.g., two speedups of 0.1x and 10x intuitively average to 1x, and not 5.05x. We want to note that the arithmetic mean is an upper bound for the geometric mean, so using the geometric mean in fact makes our speedups slightly less impressive than had we used the standard arithmetic mean.\\n\\n\\n6. \\u201cSame for the violin plots [justification]. 
For such simple plots, simple boxes and whiskers, with perhaps data points to show the spread of measurements across seeds, would do just fine.\\u201d\\n\\n--> We agree that we should explain why we use violin plots and have added a justification to the results paragraph in Section 5. The main advantage of using violin plots is being able to take into account potential multi-modality of the data distribution. E.g., in Figure 3 the geometric mean of the best-first approach is not always at a point of high density and the distribution over 8 benchmarks has multiple modes. While stated in the respective captions, here, we also want to note that our plots show either violins over benchmark geometric means, or violins over task geometric means for each benchmark. The seeds are averaged for each task individually.\\n\\n\\n7. \\u201cFigure 4, and indeed any mention of the two methods therein, can be entirely removed from the paper; other than to perhaps mention that they were tried and failed---results in the appendix.\\u201d. A much more interesting replacement for that figure would be Figure 11.\\n\\n--> In the interest of brevity, we refer to the reply to all reviewers, where we have answered this question in detail.\\n\\n\\n8. \\\"Not sure what is the point of comparing random search to TPE in the appendix unless this means Best-first then Random-search/TPE? If the latter is true, please clarify.\\\"\\n\\n--> One reviewer asked how reliable TPE is. A comparison to random search answers this. We now explicitly state this in the description. Please note that we now also include GPs in our evaluation which provide much larger speedups over random search.\\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
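The geometric-mean argument in point 5 above can be checked numerically; the snippet below uses only the Python standard library:

    import math

    speedups = [0.1, 10.0]  # a 10x slowdown and a 10x speedup
    geometric = math.exp(sum(math.log(s) for s in speedups) / len(speedups))
    arithmetic = sum(speedups) / len(speedups)
    print(geometric)   # 1.0  -- the two effects cancel, as intuition suggests
    print(arithmetic)  # 5.05 -- the arithmetic mean would overstate the speedup

This also illustrates the authors' remark that the arithmetic mean upper-bounds the geometric mean, so reporting the geometric mean is the more conservative choice.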
"{\"title\": \"Initial Response 1/2\", \"comment\": \"Thanks for the insightful comments and questions, and the suggested improvements! Thanks also for the positive feedback on our writing, the clear introduction of the new HT-AA framework, our extensive experiments, and the value HT-AA provides. We would like to comment on your suggestions, comments and questions in the following.\\n\\n\\n1. \\u201cComparisons with more baselines would be beneficial. RF and/or GP-based HPO methods are extremely popular and would have been easy to integrate with the best-first baseline.\\u201d\\n--> We have replaced TPE with GP for all transfer and non-transfer approaches. This update results in a much faster non-transfer baseline compared to TPE, and at the same time, the GP-based transfer approaches provide a larger speedup over GP than the TPE-based transfer approaches did over TPE, leading to an overall much larger speedup. Thank you for the proposal, we think this makes the paper much stronger.\\n\\n\\n2. \\u201cYou do not specify which benchmarks are based on lookup tables and which ones are based on surrogate models. From looking at the search spaces, I would assume that the SVM and XGB benchmarks are modeled via surrogate benchmarks and the FCN and NAS benchmarks are lookup tables, but this should be explicited in the paper (or appendix). Parameters used for the benchmark surrogate model should also be given (if defaults of Eggensperger are used, simply mention this).\\u201d\\n--> Yes, SVM and XGB benchmarks are surrogate benchmarks and FCN and NAS are lookup tables. We have added this information to Section 4 of the paper. All our benchmarks use code and runtime data from existing benchmarks in HPO research. While we have made this clearer in our revision, we would like to note that our original version already mentions this and includes respective citations in Section 4. As for the surrogates, we use an open-source implementation by the HPOlib authors, which we now explicitly mention in Appendix A.\\n\\n\\n3. \\u201cIt is also not clear what underlying datasets are used, this bears some importance and should be mentioned, even if only in the Appendix.\\u201d\\n--> We agree and have added this information to Appendix A.\\n\\n\\n4. \\u201cThe use of simulated benchmarks with surrogate models introduces noise in the evaluation. [...] Have you compared experiments with a few runs on a real benchmark?\\u201d\\n--> Half of our benchmarks (4 out of 8) do not use surrogate models but lookup tables. Additionally, our surrogate benchmarks are modifications of commonly used benchmarks in hyperparameter transfer across tasks. Without the use of simulated benchmarks, research on HPO and especially transfer scenarios for HPO can be prohibitively expensive for underrepresented groups with poor compute resources, and using simulated benchmarks is a standard practice to avoid this. The use of simulated benchmarks is particularly important in our case, as we need to establish a solid foundation for further research, which includes providing cheap and hardware-independent benchmarks. Having said that, in the case of the two NAS benchmarks the search space is based purely on categorical hyperparameters (i.e., which operation to apply on a given edge in the architecture graph) and objective values for all potential hyperparameter settings are in the lookup table. Therefore, the two NAS benchmarks are equivalent (but cheaper) to HPO runs on real code. 
Qualitatively, the results on the NAS benchmarks are similar to what we see across all benchmarks (Appendix B).\"}",
"{\"title\": \"Initial Response 2/2\", \"comment\": \"5. \\u201cThe contributions are simple and incremental, and clearly rooted in machine learning engineering, however I still think they could be beneficial as a whole to the community given the extensive experiments realized.\\u201d\\n--> Thank you for the characterization as beneficial as a whole to the community. However, we consider our contributions not as incremental, as we introduce a practically relevant and fundamentally novel research problem.\\n\\n\\n6. \\u201cThe method you end up recommending only has its detailed performance shown in the appendix. This feels counterintuitive to me. This result should be featured in the paper itself.\\u201d\\n--> We agree. We changed the paper to feature the recommended method more prominently.\\n\\n\\n7. \\u201cI think it is misleading to portray everything in terms of speedup or improvement over the \\\"TPE solution with X iterations\\\". A more strictly meaningful metric here is accuracy (assuming there is only one dataset per benchmark). [...] I can't seem to find such figures in the appendices.\\u201d\\n--> We consider multiple datasets per benchmark (see Table 3 in Appendix A) and our benchmarks even use different metrics (now also in Table 3), so we chose speedup as a metric to bridge all these. Additionally, besides the speedup evaluation, we also provide an evaluation of standardized objective improvement over the \\\"TPE/GP solution with X iterations\\\" (with a small delta added, as some standard deviations were 0) in Appendix D (referenced at the end of the experiments section). While this type of evaluation is more common in research on the related hyperparameter transfer across tasks problem, in the main paper we decided to focus on the speedup instead, as the viewpoint of reducing computational demands is ethically stronger (reducing the carbon footprint, etc), and as speedup is easier to interpret than some normalized average objective improvement with a small delta added.\\n\\n\\n8. \\u201cAppendix G, you wrote TPE2 instead of T2PE\\u201d\\n--> Actually, this is not a mistake. In Appendix G we compare TPE with two different seed ranges (denoted TPE and TPE2) to measure the influence of seeds in our evaluation. But we realized that this may have been confusing and renamed the two seed ranges to TPE_1 and TPE_2.\\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
"{\"title\": \"Initial Response\", \"comment\": \"Thanks for your helpful comments, questions, and the positive feedback for our approaches and numerical results, calling our proposed problem setting interesting, and recognizing the potential benefit our paper can bring to ICLR readers. In the following we provide detailed replies to your questions.\\n\\n\\n3. \\u201cNeed explain when the only-optimize-new and drop-unimportant methods can be useful. If not useful as the experiments demonstrate, why propose them?\\u201d\\n\\n--> In the interest of brevity, we refer to the general reply to all reviewers above, where we have answered this question in detail.\\n\\n\\n4. \\u201cLook like TPE is the method that show speedup for the benchmarks. How reliable the method is? Is the saving justifying use an extra tuning tool?\\u201d\\n\\n--> Based on a suggestion by other reviewers, we added Gaussian processes approaches which perform better than TPE. To measure how reliable the basic HPO without transfer is, we compare it to random search in Appendix F. As the cost for HPO scales with the cost of the algorithm it is tuning, an average speedup of 1.2--3.6x (depending on the budgets involved) and a maximum benchmark-average speedup of over 10x can be a significant cost and CO2 saver. As our transfer algorithms fold into the basic hyperparameter tuning tool itself, and do not add any additional required user actions (other than optionally choosing where to save results), using the across-adjustments transfer algorithms does not add any overhead for the user compared to using a tool for basic HPO. As for the maintainer of the tool: the complexity of the approaches we evaluated is deliberately chosen as low, so implementing these approaches is certainly feasible. The simple code for all our tools is open-sourced.\\n\\n\\nThanks again for all your comments! If we cleared up some of your concerns, we would kindly ask you to update your assessment.\"}",
"{\"title\": \"A framework for hyperparemeter transfer when ML algorithm changeshm\", \"review\": \"The paper is motivated by the situation where a machine learning algorithm has development adjustment and we would like to reuse the tuning results of previous hyperparameter optimization. The paper calls it HT-AA problem, which certainly is an interesting problem given software can get updated often. The paper proposes four simple baseline algorithms for the HT-AA problem.\\n\\nFor the empirical study, a set of eight benchmarks for basic HT-AA problem are presented. The experiment results show the transfer TPE (T2PE) and best-first strategy produce good speed up. To reach given objective values, T2PE can be 1.0\\u20131.7x faster than TPE, and best-first 1.2\\u20132.6x faster comparing with old HPO.\", \"the_pros_of_the_paper_include\": \"1. Although the topic of the paper is not one of the most popular, some readers might find it interesting and can be benefited from it.\\n2. The proposed methods are reasonable and acceptable.\\n3. Numerical results show some of the proposed methods can help to speed up reoptimizing hyperparameter.\", \"the_cons_include\": \"1. Need explain when the only-optimize-new and drop-unimportant methods can be useful. If not useful as the experiments demonstrate, why propose them?\\n2. Look like TPE is the method that show speedup for the benchmarks. How reliable the method is? Is the saving justifying use an extra tuning tool?\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Valuable framework, precisions required\", \"review\": \"The authors propose a new framework for hyperparameter optimization and transfer across incremental modifications to a given algorithm and its search space, a process called developer adjustments in the paper. The authors then propose a few strategies to transfer knowledge from previous HPO runs and evaluate them on a series of simulated benchmarks. Results show the value added by transferring information from previous runs, as well as the surprising efficiency of simply reusing the best found hyperparameters from the previous run.\", \"strong_points\": [\"The framework is simple and clearly introduced.\", \"Extensive experiments help bring to light the advantage of transferring across adjustments.\", \"The paper is very well written.\"], \"weak_points\": [\"Not enough details on benchmarks, more on this below\", \"The use of simulated benchmarks with surrogate models introduces noise in the evaluation\", \"Comparisons with more baselines would be beneficial. RF and/or GP-based HPO methods are extremely popular and would have been easy to integrate with the best-first baseline.\"], \"recommendation\": \"The contributions are simple and incremental, and clearly rooted in machine learning engineering, however I still think they could be beneficial as a whole to the community given the extensive experiments realized. I have some issues with experiments, lack of details and baselines, but those issues are mostly fixable. I'll give the paper a weak accept for now.\", \"extra_comments\": \"You do not specify which benchmarks are based on lookup tables and which ones are based on surrogate models. From looking at the search spaces, I would assume that the SVM and XGB benchmarks are modeled via surrogate benchmarks and the FCN and NAS benchmarks are lookup tables, but this should be explicited in the paper (or appendix). Parameters used for the benchmark surrogate model should also be given (if defaults of Eggensperger are used, simply mention this). It is also not clear what underlying datasets are used, this bears some importance and should be mentioned, even if only in the Appendix.\", \"on_surrogate_model_benchmarks\": \"It can be seen in (Eggensperger et al. 2015., Figure 2) that ordering of methods can shift due to noise in the surrogate model (a random forest?). This is likely going to have a bigger impact when trying to measure the speedup, which is measured when a method reaches a certain threshold of performance. This threshold is likely to be met during the convergence phase of algorithms, and this phase appears noisier (i.e. looking at how the phases of transition differ between the true benchmark and the RF surrogate benchmark differ in Eggensperger et al. 2015). Have you given this any thought? Have you compared experiments with a few runs on a real benchmark?\\n\\nThe method you end up recommending only has its detailed performance shown in the appendix. This feels counterintuitive to me. This result should be featured in the paper itself. This is perhaps due to the used of those split violin plots, which force you to display only two methods per plot. Maybe you should display a group of X single-sided violin plots where X is the number of methods you are trying to compare.\\n\\nI think it is misleading to portray everything in terms of speedup or improvement over the \\\"TPE solution with X iterations\\\". A more strictly meaningful metric here is accuracy (assuming there is only one dataset per benchmark). 
Assuming the performance to beat by original TPE was an 11% error rate, there is a big difference between a method which was able to achieve a 10% error rate and a method which was able to achieve a 5% error rate, yet both will be assessed by how quickly they achieved x < 10% error rate. I can't seem to find such figures in the appendices.\", \"typos\": [\"Section 3.1 page 3, argmax g(x) / b(x) << you mean g(x) / l(x)?\", \"appendix G, you wrote TPE2 instead of T2PE\"], \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Considers problem of warmstarting hyperparameter optimization after small changes to training algorithm and/or HP search space.\", \"review\": \"This paper is addressing a problem which is quite relevant in practice, namely how to warmstart HP optimization after small changes have been done to the ML model. Such changes may modify the HP search space, both by adding/removing HPs, or by changing their value ranges. The paper is clearly written. It introduces 3 potential baselines, as well as a simple transfer strategy. All work is based on TPE, which is frequently used, but not SotA for HPO.\\n\\nThe paper does not elaborate on the motivations of these \\\"developer changes\\\". One could suspect many are attempts to modify/improve the HPO process itself, over the *same* model. And this is not really new, there is lots of prior work to help shaping search spaces, both by quantifying HP relevance or by learning search ranges. Say, a developer modifies the value range of an HP. What other motivation would there be than mistrust in the previous range, but no change of algorithm. Same for adding/removing an HP, which normally just means going from fixed default to HPO or back. In fact, the 8 benchmarks are all of that sort. My feeling is that by viewing the problem in this way (namely, just HP search space optimization), there is suddenly a lot more related work not taken into account here. More difficult problems, such as learning ensembles from a range of models, and then adding/removing model types, are not tackled here. These would call for more difficult transfer strategies.\\n\\nThis paper does not really propose new methodology, except maybe T2PE, which is a pretty basic heuristic. There is a lot of prior work on transfer HPO, some of which could cewrtainly be adopted. Given that the paper is mainly empirical, one would expect a more thorough and wider evaluation. On the positive side, the paper introduces 8 new benchmarks, even though they are pretty simple setups. Their empirical evaluations are a little thin. Only the best-first baseline works well, results for the others are not shown. It should be noted that best-first is standard in HPO practice, this is the first thing one does for transfer. Their T2PE essentially works just as well, and a combination of the two works slightly better. While the paper categorizes types of modification, the empirical evaluation does not differentiate among them anymore. Also, the restriction to TPE is questionable. Why not also use GP-BO? All baselines would work just the same.\\n\\nMy main recommendation for this work would be to be clear about the modification for such limited developer changes. If this is just about the developer trying to twist HPO in itself, this work would have to compare against previous work for optimizing search spaces. Otherwise, please address more complex scenarios, such as ensemble learning, where HPO transfer becomes really difficult.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"This manuscript introduces a benchmark suite of hyperparameter optimization tasks, that simulate slight developer modifications, with the goal of optimizing a slightly modified algorithm given knowledge of its previous form. The paper further studies the performance of some deliberately naive approaches as a performance benchmark to accompany the suite.\", \"review\": [\"Major weaknesses of the paper:\", \"My understanding is that these are surrogate models that are meant to simulate a real-world task. However there is no description as to how these surrogates were created or trained, nor how their fidelity to the original task was vetted.\", \"The most simple and naive algorithm seems to provide similar speed-ups to the much more complicated proposed T2PE; especially when considering the much larger improvement of the naive method (see Fig. 11), none of the other proposed methods seem justified to me.\", \"Furthermore, I don't consider this naive approach (Best First) as being an HPO approach that leverages transfer, since it does exactly what anyone would do when faced with a slightly altered set of hyperparameters.\", \"This last point suggests that at least one of the following must be true:\", \"non-trivial transfer is not as important as intuition would lead us to think;\", \"this benchmark suite does not provide a good testbed for assessing an HPO method's ability to transfer; or\", \"none of the non-trivial proposed algorithms do a good job transferring and can therefore not argue against the previous point.\", \"Given this important contradiction, I must recommend a rejection. My recommendations would be to:\", \"focus on creating a good benchmark suite (perhaps focus on a single or two domains as introduce many variants, instead of four domains with only two variants);\", \"focus on vetting the surrogates in terms of their fidelity to the task they are meant to simulate; and\", \"focus on demonstrating that accounting for the slight modifications in the subsequent HPO does indeed provide a benefit over the naive thing to do.\", \"For the record, I don't think this is an easy task.\"], \"minor_points\": [\"Justify geometric mean. I'm not saying it's the wrong way to compare these, I just think it requires at least a sentence of justification.\", \"Same for the violin plots. For such simple plots, simple boxes and whiskers, with perhaps data points to show the spread of measurements across seeds, would do just fine.\", \"Figure 4, and indeed any mention of the two methods therein, can be entirely removed from the paper; other than to perhaps mention that they were tried and failed---results in the appendix.\", \"A much more interesting replacement for that figure would be Figure 11.\", \"Not sure what is the point of comparing random search to TPE in the appendix unless this means Best-first then Random-search/TPE? If the latter is true, please clarify.\"], \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}"
]
} |
qbH974jKUVy | The role of Disentanglement in Generalisation | [
"Milton Llera Montero",
"Casimir JH Ludwig",
"Rui Ponte Costa",
"Gaurav Malhotra",
"Jeffrey Bowers"
] | Combinatorial generalisation — the ability to understand and produce novel combinations of familiar elements — is a core capacity of human intelligence that current AI systems struggle with. Recently, it has been suggested that learning disentangled representations may help address this problem. It is claimed that such representations should be able to capture the compositional structure of the world which can then be combined to support combinatorial generalisation. In this study, we systematically tested how the degree of disentanglement affects various forms of generalisation, including two forms of combinatorial generalisation that varied in difficulty. We trained three classes of variational autoencoders (VAEs) on two datasets on an unsupervised task by excluding combinations of generative factors during training. At test time we ask the models to reconstruct the missing combinations in order to measure generalisation performance. Irrespective of the degree of disentanglement, we found that the models supported only weak combinatorial generalisation. We obtained the same outcome when we directly input perfectly disentangled representations as the latents, and when we tested a model on a more complex task that explicitly required independent generative factors to be controlled. While learning disentangled representations does improve interpretability and sample efficiency in some downstream tasks, our results suggest that they are not sufficient for supporting more difficult forms of generalisation. | [
"disentanglement",
"compositionality",
"compositional generalization",
"generalisation",
"generative models",
"variational autoencoders"
] | Accept (Poster) | https://openreview.net/pdf?id=qbH974jKUVy | https://openreview.net/forum?id=qbH974jKUVy | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"HyR3OmGeoDp",
"vFukuwd9zuM",
"DkUCb6IKMWG",
"M7_-_GHybBr",
"jVmepU2KAzT",
"NPfK1_8F_iF",
"QEWj7iPeOg",
"QNVTBwcMqwT",
"ArZgdLjTWGt",
"8ZoH8MCEUWt",
"GDzfQutLtLm",
"Lv1HquEsfUu"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040398542,
1606250819562,
1606237336503,
1606237315104,
1606237242147,
1606237194986,
1606237136596,
1606237067889,
1604199535955,
1603942434863,
1603822017854,
1602625024987
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3567/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3567/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3567/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3567/AnonReviewer1"
],
[
"ICLR.cc/2021/Conference/Paper3567/AnonReviewer4"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"The paper seeks to empirically study and highlight how disentanglement of latent representations relates to combinatorial generalization. In particular, the main argument is to show that models fail to perform combinatorial generalization or extrapolation while succeeding in other ways. This is a borderline paper. For empirical studies it is also less agreed upon in general where one should draw the line about sufficient coverage of experiments, i.e., the burden of proof for primarily empirically derived insights. The initial submission clearly did not meet the necessary standard as the analysis was based on a single dataset and studied only two methods (VAE and beta-VAE). The revised version of the manuscript now includes additional experiments (an additional dataset and two new methods), still offering largely consistent pattern of observations, raising the paper to its current borderline status. Some questions remain about the new results (esp the decoder).\"}",
"{\"title\": \"Thanks, I think the paper is substantially improved\", \"comment\": \"Thanks to the authors for the response and for the additional experiments, I think the paper has improved significantly and have updated my rating accordingly.\"}",
"{\"title\": \"Response to AnonReviewer4: Part II\", \"comment\": \"Response to \\u201cHowever, even empirical evaluation... related discussion)\\u201d: Thank you for pointing out the Lampinen & McClelland (2020) paper. This is an interesting perspective that we hadn\\u2019t come across before. We agree, disentanglement may not be the solution to some (or indeed many) tasks. We also agree that it is more difficult to know what factors could be disentangled in more naturalistic settings, and indeed, that is why we focused on the dSprites and 3DImage datasets as a starting point.\\n\\n \\n\\nResponse to \\u201cIt also seems likely... real world data\\u201d: We are not sure we agree with the reviewer that the Stroop effect reflects the impact of entangled representations \\u2013 colour and shape might be disentangled (and indeed there is good evidence for this from Garner interference) with the interference the product of a response conflict. That is, colour and shape representations may be processed separately using disentangled representations, and the delay in responding reflects the conflicting outputs of these two processes (e.g., colour channel outputting \\u201cblue\\u201d and word channel outputting \\u201cred\\u201d). But we do agree that more complicated forms of generalisation may need more than a feed-forward pass. Our findings highlight the limitations of disentangled presentations in this context, and we agree with the reviewer that more complex environments, tasks, and architectures are likely needed.\"}",
"{\"title\": \"Response to AnonReviewer4: Part I\", \"comment\": \"Thank you for your detailed comments and feedback. We respond to each point below:\\n\\nResponse to \\u201cThe relationship of these results... types of generalisation\\u201d: We have now modified Section 1.1, where we discuss the Locatello et al. (2019) paper and their more recent paper (van Steenkiste et al.). These papers find that (a) unsupervised methods cannot always identify disentangled models and (b) disentangled representations do not necessarily lead to decreased sample complexity for downstream tasks. Here, we were focused on the specific case of combinatorial generalisation and its relation with disentanglement. Our new simulation makes this point in the most emphatic manner: even when latent representations are completely disentangled, it does not guarantee combinatorial generalisation. To test this, we left out various combinations from the training dataset, which is quite different to the tests conducted by Locatello et al. (2019). But we agree that there is a close connection between the two studies, and we have now made this clear in the manuscript. \\n\\n \\n\\nResponse to \\u201cThe experiments are very narrow...\\u201d: This was a concern shared by many reviewers. Therefore, we have now run a new set of experiments expanding and replicating our findings. Please see our general response to all reviewers, where we have discussed the new experiments and findings. \\n\\n \\n\\nResponse to \\u201cIt would be useful to show... , perhaps)\\u201d: Thank you \\u2013 this is really useful feedback. We have now done exactly this, replacing Table 1 with Figure 3. If generalisation improves as a result of disentanglement, NLL should decrease as disentanglement increases. This is clearly not the case for any of our test conditions. The only exception to this is the NLL score for the perfectly disentangled (decoder) model in the Recombination-to-Range condition. But even here, the lower NLL value is misleading as the models reconstruct crucial elements of the image (the combination of left-out generative factors) incorrectly. This can be seen by looking at the example reconstructions in Figures 2, 3 and in the Appendix Figures 8 & 10. \\n\\n \\n\\nResponse to \\u201cOne reason... different dimensions\\u201d: We have now added the 3D Shapes dataset and explored different forms of generalisation in these datasets. For example, in the dSprites dataset we explore extrapolation using translation and in the 3D dataset we explored extrapolation along the color dimension of the floor. \\n\\n \\n\\nResponse to \\u201cIncreasing realism... more explicitly\\u201d: We thank the reviewer for pointing out the Hill et al. (2020) paper. The paper makes the important point that generalisation is better (albeit still limited) when the training environment is richer (effectively making the generalisation less difficult). This is somewhat akin to our finding that the models were able to generalise in the Recombination-to-element condition as lesser combinations were left out. The key difference between Hill et al and our paper is that we were interested in the role of disentanglement in generalisation, which is not something that was explored there. We now cite this paper (Discussion) and highlight the need to explore more tasks, more training environments, more architectures, and the role of disentanglement in order to achieve more human-like generalisation. \\n\\n \\n\\nResponse to \\u201cThe paper raises the question... 
generalisation you articulate\\u201d: We agree with the reviewer\\u2019s comment that \\u201cDisentanglement \\u2260 compositional representations \\u2260 systematic generalization\\u201d. We now explicitly make this point in the General Discussion. As we mentioned above, we have now explicitly added an experiment where we start with perfectly disentangled representations and train the decoder to reconstruct based on these representations. As predicted by the reviewer, this model fails in the same conditions as the end-to-end models. However, we do not want to make the strong conclusion that we should focus on performance at the expense of worrying about representations. For example, we cite Hummel (2000) who argues that disentangled (localist) representations are useful (but not sufficient) for implementing symbolic computations needed for broader generalisation. We are not committed to this view either, but we do think that considering different architectures and different types of inductive biases that impact on representations is something that needs to be explored alongside focusing on performance in richer training environments.\"}",
"{\"title\": \"Response to AnonReviewer1\", \"comment\": \"The reviewers key concern was that we did not carry out enough experiments to rigorously test our hypotheses. This is a concern that was shared by other reviewers. Therefore, we have now run a set of new experiments testing the robustness of our findings. Please see our general comments to all reviewers where we describe these new experiments and findings.\\n\\n \\n\\nResponse to \\u201cThe combinatorial generalization task... generalization?\\u201d: The van Steenkiste et al. study shows that learning disentangled latent representations for images led to faster learning in a visual reasoning tasks that employed those images. But it is important to note that van Steenkiste et al. did not exclude specific training conditions in order to test combinatorial generalisation or extrapolation in the visual reasoning task. We do not dispute that learning disentangled representations may be useful for downstream tasks as it may improve sampling efficiency. However, performing combinatorial generalisation requires an ability to syntactically combine these disentangled representations to form new combinations, which current models lack. In fact, in another visual reasoning task Barrett et al. (2018) excluded specific training conditions and found that performance in the most difficult combinatorial generalisation conditions was \\u201cstrikingly poor\\u201d, even after they modified the model and training conditions to improve performance. The van Steenkiste et al. finding that disentangled representations improve learning in downstream tasks is important, but it does not provide evidence that these representations improve more difficult forms of generalisation. \\n\\nBarrett, D. G., Hill, F., Santoro, A., Morcos, A. S., & Lillicrap, T. (2018). Measuring abstract reasoning in neural networks. arXiv preprint arXiv:1807.04225. \\n\\n \\n\\nResponse to \\u201cPerhaps it would be interesting... model hyperparameters\\u201d: We did indeed vary random seeds in addition to beta values. The results we show in the manuscript are for the seeds that were able to accomplish highest performance and largest disentanglement. We have now made this clear in the manuscript (pg 4). It should also be noted that we have now run a simulation where perfectly disentangled representations used in our decoder model supported limited generalisation. This suggests that the there are no \\u03b2 values that would lead to higher levels of disentanglement that would in turn support better generalisation. \\n\\n \\n\\nResponse to \\u201cIn some place... generalisation\\u201d: We have now consistently adopted the English spelling, although in the search terms we include both so that paper is easier to find.\"}",
"{\"title\": \"Response to AnonReviewer2\", \"comment\": \"We are pleased to hear that the reviewer found the question posed in this manuscript interesting and liked our approach in investigating the issue. We have responded to the reviewer\\u2019s point 3 above in the general comments \\u2013 like this reviewer, other reviewers also felt that our study needed more experiments. Consequently, we have tested these results on two more models and another dataset. Our response to the other two major concerns of the reviewer is below:\", \"response_to_point_1\": \"We agree that visualising the relationship between disentanglement and generalisation needed a bit more work. Therefore, we have now replaced Table 1 with Figure 3, which plots the relation between D-Scores and NLL for various models and conditions. If generalisation improves as a result of disentanglement, NLL should decrease as disentanglement increases. This is clearly not the case for any of our test conditions. The only exception to this is the NLL score for the perfectly disentangled (decoder) model in the Recombination-to-Range condition. But even here, the lower NLL value is misleading as the models reconstruct crucial elements of the image (the combination of left-out generative factors) incorrectly. This can be seen by looking at the example reconstructions in Figures 2 and 4 in the main text and in the Appendix, Figures 8 and 10.\\n\\nRegarding the results in Table 2 (now Table 1), we think that the reviewer has misunderstood the results here. It only makes sense to compare the relationship between D-scores and NLL *within* a condition. The results in Table 2 (now Table 1) only present one D-score/NLL per condition. The reason why we include only a single experiment per condition is that we found that the models trained on the image composition task learned highly disentangled representations even with beta=1. In fact, using high beta values on the full dataset actually prevented the model from learning at all. Thus, we did not vary the level of beta in order to manipulate the level of disentanglement within each generalisation condition (as we did in image reconstruction experiments). The key result here is that, despite the high D-score, models failed in their reconstruction on the critical combinations in the Recombination-to-range and Extrapolation condition.\", \"response_to_point_2\": \"Thank you \\u2013 this is really useful feedback. We have now revised the section describing the taxonomy of generalization (pg. 2\\u20133). We have removed the \\u201cInterpolation\\u201d condition as this is indeed logically equivalent to Recombination-to-Element condition \\u2013 both exclude one combination. The reviewer is also correct about Figure 1 - it only illustrates the test conditions for the three-dimensional case. To show how this taxonomy generalises to higher dimensions (more than three generative factors) we now discuss each type of test using a vector notation where it is easy to see how Extrapolation and Recombination-to-Range are a lot more challenging than the Recombination-to-Element condition. The difference between different test conditions is the number of combinations (values and generative factors) that have been excluded. This notation also makes it clear that this taxonomy can indeed be generalised to the continuous case. Throughout the manuscript, we illustrate these different conditions with specific examples of generative factors and values that were excluded from the training set to test the models.\"}",
"{\"title\": \"Response to AnonReviewer3\", \"comment\": \"Thank you \\u2013 this is really useful feedback. As we indicate above in our general comments, we have now replicated our results for a second dataset 3DShapes and two more models \\u2013 Factor-VAE and a perfectly disentangled model. Our motivation in choosing these models and dataset has also been outlined above. \\n\\n \\n\\nResponse to \\u201cThe paper\\u2019s study is limited... generalise much better\\u201d: We agree with the reviewer that beta-VAEs have limitations in disentanglement. This motivated our choice of FactorVAE and the perfectly disentangled model for testing. As can be seen from Figure 3, these approaches lead to better disentanglement scores and still fail in the reconstruction of the correct combinations (Figures 2 and 4). We argue that the reason behind this is that disentanglement is not sufficient for generalisation. \\n\\n \\n\\nResponse to \\u201cThe study is on... ICML\\u201919\\u201d: Our work is indeed related to that in Locatello et al. There the question is about whether it is possible to learn disentangled representations, the consistency of different approaches in learning disentanglement and whether disentanglement leads to decreased sampling complexity for downstream tasks. Our study complements theirs by asking whether disentanglement necessarily leads to better combinatorial generalisation. Performing combinatorial generalisation is a key ability of human beings and our study challenges the assumption that obtaining disentangled representations are sufficient for doing this. We have now made the connection between our approach and theirs explicit in Section 1.1, pg 2. \\n\\n \\n\\nResponse to \\u201cFor image composition... the insights\\u201d: Thanks \\u2013 we have now provided more detail on this in Section Section 2.3, pg 8.\"}",
"{\"title\": \"Response to all reviewers:\", \"comment\": \"We would like to thank all the reviewers for the detailed and insightful comments. The most important comment made by all the reviewers is that our strong conclusions regarding disentanglement and generalisation are limited by the fact that we only tested a small number of models using one dataset. Accordingly, our findings might be idiosyncratic to the specific experiments we carried out. We detail our response to this point here and then respond to each reviewer separately on remaining concerns.\\n\\nTo test the robustness of our results, we have carried out a new set of simulations that included a new dataset (3DShapes) and two new models (FactorVAE and a decoder model) in addition to the two models (VAE, beta-VAE) that we tested before. We chose the FactorVAE model as this model has been explicitly designed to encourage independent distribution of representations, which makes it particularly well suited to understand the role of degree of disentanglement in generalisation. All models were applied to both the dSprites and 3DShapes datasets and we obtained the same pattern of results across all conditions: models fail to perform combinatorial generalization or extrapolation, except in the simplest (recombination-to-element) case. These new results are included in Section 2 and Figures 2, 3 and 4. \\n\\nThe new decoder model we introduce had perfect disentangled representations as we use the true latent values as inputs. We included the decoder model in order to addresses the concern that none of the models trained end-to-end learned perfectly disentangled representations, and this raises the question as to whether there is another (untested or undiscovered) model that would learn even more disentangled representations that would generalize better. The results with the decoder show that even perfectly disentangled representations fail in the same way. \\n\\nThe reviewers had also mentioned other datasets, such as 3DCars, 3DChairs and CelebA. We chose 3DShapes because this dataset explicitly lists the generative factor values for each of the images. This allowed us to selectively remove some combinations of generative factors from training to assess various forms of generalisation. \\n\\nThe fact that we obtained the same pattern of results in two quite different datasets for a range of different disentanglement values strengthens our conclusion that disentangled representations are not helpful for combinatorial generalisation in the models and tasks that we tested. But of course, it would be interesting to test a wider variety of datasets, models, and tasks in the future. One of the important take-home messages of our work is that future work needs to explore new tasks and new decoding architectures in order to improve generalisation.\"}",
"{\"title\": \"interesting, but limited study on the ability of disentanglement to generalize\", \"review\": \"Summary\\n\\nLearning disentangled representation is often considered an important step to achieve human-like generalization. This paper studies how the degree of disentanglement affects various forms of generalization. Variational autoencoders (VAEs) is trained with different levels of disentanglement on an unsupervised task by excluding combinations of generative factors during training. At test time the models are used to reconstruct the missing combinations in order to measure generalization performance. The paper shows that the models support only weak combinatorial generalization. The paper also tests the models in a more complex task which explicitly required independent generative factors to be controlled. The paper concludes that learning disentanglement representation is not sufficient for supporting more difficult forms of generalization.\\n\\nStrengths\\n\\nThe paper studies 4 types of generalization, interpolation, recombination to element, Recombination to range, extrapolation. \\n\\nIt shows beta-VAE can achieve reasonable generalization by interpolation, not the other three types. \\n\\nWeaknesses\\n\\nThe paper's study is limited to beta-VAE and dSprites dataset. However, it makes broad claims on the role of disentanglement in generalization.\\n\\nBeta-VAE has limitations in disentanglement. It is not clear other disentanglement approaches such as Wasserstein auto-encoder, InfoGAN-CR (ICML'20) would not generalize much better. \\n\\nThe study is on unsupervised disentanglement. Unsupervised disentanglement has inherent limitations, see \\\"Challenging Common Assumptions in the Unsupervised Learning of Disentangled Representations\\\" ICML'19.\\n\\nThe paper should conduct experimental studies on other datasets, e.g. those in the above reference.\\n\\nFor image composition tasks, it states \\\"concatenating input representations with the actions and linearly combining the resultant vectors\\\". It will be great to explain the insights.\\n\\nDecision\\n\\nThe paper has some interesting results on the role of disentanglement in generalization. However, the paper's study is very limited to specific model and a single dataset. Therefore, it is below acceptance threshold.\\n \\n----Post-revision update---\\n\\nThe authors have provided results on a second dataset 3DShapes and two more models \\u2013 Factor-VAE and a perfectly disentangled model. However, the construction results of the GT decoder is much worse that other models, see Figure 2; it does not reconstruct the details of the \\\"heart\\\" shape even for training and the edges of the \\\"square\\\" are not straight. This begs the question how good the GT decoder is. The open question is, what is the generalization capability of a GT decoder that can both reconstruct and disentangle perfectly?\\n\\nWasserstein auto-encoder has been shown to disentangle better and the regularization term is on the aggregate posterior instead of individual samples. Without results on WAE, the paper should refrain from making broad claims on disentanglement. Furthermore, it would be interesting to investigate GAN based approach such as InfoGAN-CR as well.\\n\\nFor an experimentation paper, it should be more thorough and go beyond just two shape datasets.\\n\\nI applaud the additional results the authors provided. I still think the paper is borderline (more toward 6 now). 
If it fixes the aforementioned weaknesses, I would recommend accept.\", \"rating\": \"5: Marginally below acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting and relevant analysis, but the conclusions aren't clear enough yet\", \"review\": \"Summary\\n---\\nA large body of work creates disentangled representations to improve\\ncombinatorial generalization. This paper distinguishes between 4 types of\\ngeneralization and shows that existing unsupervised disentanglement approaches\\ngeneralize worse to some and better to others.\\n\\n(introduction)\\nThere are 3 types of combinatorial generalization. Each requires a learner to\\ngeneralize to a set of instances where 0, 1, or 2 dimensions have been completely held out.\\nPrevious work has not distinguished between these kinds of generalization when\\ntesting how disentangled representations generalize. This work does that to\\nunderstand the relationship between disentanglement and combinatorial\\ngeneralization in a more fine grained manner.\\n\\n(approach)\\nThroughout this paper, beta-VAE and a recent variant are trained with varrying levels of\\ndisentanglement (controlled by beta) to reconstruct d-sprites images.\\nThese images contain simple shapes and are generated using 5 ground truth\\nlatent factors. The ground truth latent factors allow disentanglement to be\\nmeasured (using Eastwood and Williams 2018), essentially by checking whether\\nthe ground truth latent factors are linearly separable in the learned latent space.\\n\\n(experiment - plain reconstruction)\\n* reconstruction error differs for different types of combinatorial generalization (holding out fewer dimensions is easier)\\n* reconstruction error is not highly correlated with disentanglement\\n\\n(experiment - compositional reconstruction)\\nInstead of reconstructing the input, a version of the input with one attribute changed is generated.\\n* generation error differs for different types of combinatorial generalization (holding out fewer dimensions is easier)\\n\\n(conclusion)\\nUsually disentanglement is encouraged to achieve combinatorial generalization, but this paper presents a simple experiment where it doesn't do that.\\n\\n\\n\\nStrengths\\n---\\n\\nThe central claim of the paper may help clarify the disentanglement literature.\\n\\nIt seems very useful to taxonomize generalization in this way.\\n\\nThe writing and motivation is generally very clear. The figures are easy to understand and help demonstrate the narrative.\\n\\nThis paper aims to characterize an existing line of work in detail rather than proposing a new approach/dataset/etc. I like work of this nature and would like to see more like it.\\n\\n\\n\\nWeaknesses\\n---\\n\\n\\n1. The relationship between disentanglement and generalization is clearly or quantitatively demonstrated:\\n\\nThe most interesting claim in this paper is that disentanglement is not necessarily correlated with combinatorial generalization, but this claim is not clearly supported by the data.\\n\\n* The main support comes from table 1. Here higher D-score does not necessarily mean lower test NLL. This observation should be made quantitative, probably just by measuring correllation between D-score and test NLL.\\n\\n* Table 2 seems to contradict this claim. In that case higher D-score does mean lower test NLL.\\n\\n\\n2. The taxonomy of generalization is a bit too specific to be useful and a bit incoherent:\\n\\nThe difference between \\\"Interpolation\\\" and \\\"Recombination to element\\\" generalization\\nis not clear to me. 
Each of the purple and red cubes in figure 1a represents\\na combinations of rotation, shape, and translation factors.\\nIt may be that it makes a difference when some dimensions are categorial\\nand others are continuous, as in the Interpolation example, but this doesn't\\nseem to really solve the factor because continuous latent variables\\nare still latent variables. I see some vague intuition behind this distinction,\\nbut the paper does correctly identify the precise distinction.\\n\\nFurthermore, this taxonomy of generalization seems limited to me.\\nIt seems like \\\"Recombination to element\\\", \\\"Recombination to range\\\", and \\\"Extrapolation\\\"\\njust hold out a different number of dimensions (e.g., \\\"none\\\", \\\"rotation\\\", and \\\"shape and rotation\\\", respectively).\\nThis begs the question of what happens when there are 4 generative dimensions?\\nIs generalization when 3 of those are held out also called \\\"Extrapolation\\\"?\\n\\nI think more work needs to be done to create a taxonomy which precisely and clearly generalizes\\nto N latent factors and creates a more coherent distinction between combinatorial and\\nnon-combinatorial generalization.\\nHowever, I think it's possible to create a better taxonomy and that it\\nwill probably be very useful to do so.\\n\\n\\n3. The paper should test the idea more thoroughly, on more datasets and on more disentanglement approaches. For example, it could include other datasets or tasks with different ground truth factors of variation (e.g., 3D chairs [1]). It could also include more disentanglement approaches like [2].\\n\\n\\n[1]: M. Aubry, D. Maturana, A. Efros, B. Russell, and J. Sivic. Seeing 3d chairs: exemplar part-based 2d-3d\\nalignment using a large dataset of cad models. In CVPR, 2014.\\n[2]: Esmaeili, B. et al. \\u201cStructured Disentangled Representations.\\u201d AISTATS (2019).\\n\\n\\n\\n\\n\\nComments / Suggestions\\n---\\n\\nDescribe the disentanglement metric in more detail. From the beginning disentanglement is treated differently from combinatorial generalization. It's not immediately clear what disentanglement is that makes it different and why that's interesting to study. For example, initially one might think that beta-VAE is inherently disentangled.\\n\\nCan this taxonomy of generalization be generalized to continuous domains? For example, can it be generalized to any (typically continuous) hidden layer a neural net learns?\\n\\n\\n\\nPreliminary Evaluation\\n---\\n\\nClarity - The presentation is quite clear.\\nQuality - The claims are not quite well enough supported. The experiments that were run don't support a clear conclusion and more experiments should have been run to support a more general conclusion.\\nNovelty - I don't think anyone has catalogued the performance of disentanglement methods in terms of a generalization taxonomy.\\nSignificance - This paper might help clarify the disentanglement literature and more broadly help people think about combinatorial generalization.\\n\\nI like this paper because of its clarity, novelty, and significance. 
However, I think the quality concerns are significant enough that it shouldn't be accepted at this stage.\\n\\nFinal Evaluation (Post Rebuttal)\\n---\\nThe author response and accompanying paper revision clearly and effectively addressed each of the 3 main weaknesses I pointed out, so I raised my rating.\", \"rating\": \"7: Good paper, accept\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Interesting study, but needs more experiments\", \"review\": \"Summary:\\nThis paper studies the performance of models producing disentangled representations in the downstream task of combinatorial generalization. The experiments suggest that models producing disentangled representations do not generalize well enough.\", \"pros\": [\"The paper is well-written and easy to follow.\", \"The authors propose four novel benchmarks to systematically study the ability of a model to generalize.\"], \"concerns\": [\"The key concern is that the paper does not present enough experiments to support the authors' claims. The study was conducted only for one dataset; I would suggest to include several other datasets in your study, e.g., MPI 3D, Shapes 3D, Cars 3D datasets. Also, the results would be stronger if the paper presented the assessment of other disentanglement specific metrics; see, for example, MIG [1], Modularity [2], etc.\", \"Comments/questions:\", \"The combinatorial generalization task looks similar to the abstract reasoning task; it was shown that disentangled representations help in this downstream task [3]. How do you think, why it does not hold as well for combinatorial generalization?\", \"Perhaps it would be interesting to vary random seeds in addition to $\\\\beta$ values; it was shown in Locatello [4] that random seeds sometimes have a stronger influence on disentanglement scores than model hyperparameters.\"], \"minor_comments\": [\"In some places, you write \\\"generalization\\\", in other -- \\\"generalisation\\\".\"], \"upd\": \"The authors addressed my concerns and added additional experiments. The paper is improved, therefore, I increase the rating.\", \"references\": \"[1] Ricky TQ Chen, Xuechen Li, Roger B Grosse, and David K Duvenaud. Isolating sources of disen- tanglement in variational autoencoders. In Advances in Neural Information Processing Systems, pp. 2610\\u20132620, 2018.\\n\\n[2] Karl Ridgeway and Michael C Mozer. Learning deep disentangled embeddings with the f-statistic loss. In Advances in Neural Information Processing Systems, pp. 185\\u2013194, 2018.\\n\\n[3] van Steenkiste, Sjoerd, et al. \\\"Are Disentangled Representations Helpful for Abstract Visual Reasoning?.\\\" Advances in Neural Information Processing Systems. 2019.\\n\\n[4] Locatello, Francesco, et al. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" international conference on machine learning. 2019.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"A good start, but restricted experiments limit the conclusions, and the discussion could be refined\", \"review\": \"Post-revision update\\n------------------------\\n\\nThanks to the authors, I think that the revision provided by the authors makes the paper substantially stronger. The inclusion of the more complex Shapes3D dataset substantially improves the experiments, and I think the discussion has improved. I have revised my rating to a clear accept in accordance.\\n\\nOriginal Review\\n----------------------\\n\\nThis paper evaluates the role of disentanglement in generalization. The authors begin by articulating a useful distinction between different kinds of generalization by interpolation, recombination or extrapolation. They then train VAEs (and variations thereof) on a controlled, synthetic dataset, using two different training paradigms. They show that the models only generalize well to one of the most elementary types (recombination to element), but do not extrapolate well to the more difficult kinds of generalization. They also show that disentanglement does not seem to correlate with better generalization.\\n\\nI find this paper to be marginally above the acceptance threshold, but it has room for improvement (see below).\", \"strengths\": [\"Generalization and the role of compositionality and disentanglement therein are very important issues.\", \"I like the articulation of the different kinds of generalization, and the clear illustration thereof.\"], \"areas_for_improvement\": \"* The relationship of these results to Locatello et al. (2019) would be worth discussing further. They also showed that disentanglement did not necessarily lead to better generalization, with a wider range of experiments (though therefore perhaps less deep in evaluating different types of generalization).\\n\\n* The experiments are very narrow. The paper uses a single dataset (though with two different tasks), and it is very toy. As noted by Locatello et al. (2019), the inferences drawn from a single dataset may be very biased. While simple synthetic datasets can be useful for allowing more carefully controlled experiments, it would be useful to explore the same experiment\\ns using different datasets with different features (e.g. color and texture). It would also be useful to understand generaliza\\ntion in richer, more realistic datasets (see below).\\n\\n* It would be useful to show some of the major claims in a less opaque way. For example, to show the (non)relationship between disentanglement and generalization, the authors could make a plot with D-score on the x-axis, and generalization performance on the y-axis (with different plot panels for the different types of generalization, perhaps). \\n\\n* One reason that exploring other datasets is important is that the particular inductive biases of the models may facilitate generalization along certain feature dimensions. For example, the fact that convolutions are generally (relatively) translation-invariant but not rotation-invariant mean that the model might more easily extrapolate to unseen translations. In order to draw broad conclusions, it would be useful to both explore more diverse datasets and quantitatively analyze in more detail t\\nhe generalization along different dimensions.\\n\\n* Increasing realism can produce qualitative improvements in compositional generalization in some settings. For example, Hill et al. (2020) showed that e.g. 
generalization was better in a 3D setting than 2D setting, and that an RL agent showed 100% compositional generalization in a setting where a classifier only showed ~80%, for instance (although this generalization was recombination, not extrapolation). Thus, the poverty of the stimuli may alter the paper's conclusions. Even if your study is useful, it's worth discussing this limitation more explicitly.\\n\\n* The paper raises the question of why disentangled representations are not more effective in supporting compositional generalization, but it's worth asking why we assume that they would. Disentanglement $\\\\neq$ compositional representations $\\\\neq$ systematic generalization. Fodor & Pylyshyn suggest that compositional representations are necessary for systematic generalization, but they don't certainly don't provide an empirical definition of how to evaluate compositionality of representations. Indeed, it's hard to define such a notion: \\\"The question of whether a model [generalizes] according to compositional task structure is distinct from the question of whether the model\\u2019s representations exhibit compositional structure. Because the mapping from [...] representations to behavior is highly non-linear, it is difficult to craft a definition of compositional representations that is either necessary or sufficient for generalization\\\" (Lampinen & McClelland, 2020). Your results, along with those of Locatello et al (2019) and others, lend support for this argument. Disentanglement in some middle layer of the network does not seem to show a causal role in generalization, presumably in part because the processes intervening between that representation and the output are nonlinear, because that nonlinear decoder is also capable of failing for some combinations of latent representations even if the representations themselves are compositional, and/or because disentanglement is not a sufficient notion of compositionality. Given these difficulties, I think you could potentially extrapolate further than you do, to ask whether we should be worrying about representations at all, rather than just evaluating (and improving) behavioral performance on the different types of generalization you articulate.\\n\\n* However, even empirical evaluation is challenging in more naturalistic settings. The notion of \\\"disentanglement\\\" or \\\"composition\\\" may be harder to define in realistic datasets. It's not clear what the appropriate decomposition of a complex naturalistic image is \\u2014 objects are a natural place to start, which is why Higgins and others have focused on this type of decomposition. But what counts as disentangled in a visual scene of e.g. a forest? Is each leaf an object that must be disentangled in its own right? Should the color of each piece of bark on each tree be represented by its own dimension, since *in principle* it could vary independently? This seems unreasonable, which is perhaps why disentanglement is usually demonstrated on very simplistic datasets. Yet a human can of course *attend* to any particular aspect of the scene to disentangle that dimension as needed. In the real world, human-like performance might require the ability to construct *new decompositions on the fly,* because the appropriate decompositions may change as the task or data shifts. 
That is, the idea of seeking a priori disentanglement with respect to fixed dimensions might not be the right way to go about achieving human-like generalization, especially if we want that generalization to extrapolate to new data and new tasks. (C.f. Lampinen & McClelland, 2020 for some other related discussion.)\\n \\n* It also seems likely that the processes that allow humans to exhibit strong generalization may require extended or additional processing, rather than a single feed-forward pass as in a VAE. This would be necessary to allow the sort of attentive disentanglement described in the previous point. For example, the Stroop effect in cognition seems to me to illustrate feature entanglement, which requires higher level control processes to resolve the appropriate response. The paper does discuss the idea that other mechanisms or architectures might be involved in the discussion, but it seems it bears more elaboration, especially w.r.t. the above points about the definability of disentanglement with real world data.\\n\\n\\n\\nReferences\\n-----------\\n\\nHill, Felix, et al. \\\"Environmental drivers of systematicity and generalization in a situated agent.\\\" International Conference on Learning Representations. 2020.\\n\\nLampinen, Andrew K., and James L. McClelland. \\\"Transforming task representations to allow deep learning models to perform novel tasks.\\\" arXiv preprint arXiv:2005.04318 2020.\\n\\nLocatello, Francesco, et al. \\\"Challenging common assumptions in the unsupervised learning of disentangled representations.\\\" international conference on machine learning. 2019.\", \"rating\": \"8: Top 50% of accepted papers, clear accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |
eU776ZYxEpz | Learning to live with Dale's principle: ANNs with separate excitatory and inhibitory units | [
"Jonathan Cornford",
"Damjan Kalajdzievski",
"Marco Leite",
"Amélie Lamarquette",
"Dimitri Michael Kullmann",
"Blake Aaron Richards"
] | The units in artificial neural networks (ANNs) can be thought of as abstractions of biological neurons, and ANNs are increasingly used in neuroscience research. However, there are many important differences between ANN units and real neurons. One of the most notable is the absence of Dale's principle, which ensures that biological neurons are either exclusively excitatory or inhibitory. Dale's principle is typically left out of ANNs because its inclusion impairs learning. This is problematic, because one of the great advantages of ANNs for neuroscience research is their ability to learn complicated, realistic tasks. Here, by taking inspiration from feedforward inhibitory interneurons in the brain we show that we can develop ANNs with separate populations of excitatory and inhibitory units that learn just as well as standard ANNs. We call these networks Dale's ANNs (DANNs). We present two insights that enable DANNs to learn well: (1) DANNs are related to normalization schemes, and can be initialized such that the inhibition centres and standardizes the excitatory activity, (2) updates to inhibitory neuron parameters should be scaled using corrections based on the Fisher Information matrix. These results demonstrate how ANNs that respect Dale's principle can be built without sacrificing learning performance, which is important for future work using ANNs as models of the brain. The results may also have interesting implications for how inhibitory plasticity in the real brain operates. | [
"anns",
"principle",
"dale",
"inhibitory units",
"separate excitatory",
"biological neurons",
"neuroscience research",
"excitatory",
"brain",
"danns"
] | Accept (Poster) | https://openreview.net/pdf?id=eU776ZYxEpz | https://openreview.net/forum?id=eU776ZYxEpz | ICLR.cc/2021/Conference | 2021 | {
"note_id": [
"67LPmicHqO",
"j7NIASdZFR5",
"ShXMptFeY_k",
"4SI986tQku4",
"P256FmBbcjK",
"H8Hq8Xau-gQ",
"2U7hfGLpTQu",
"8lo7UtDKhB",
"oFAiENJBv2g",
"6Gf0Um79_t7",
"X_9PXjgRW8l",
"xDPcWIL6voa",
"fqsN5cr1UkC",
"hVaJNmU49No",
"PXCX5Vt_W_B",
"UOaNyAsFi3",
"10hN6uh2oaE"
],
"note_type": [
"decision",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"official_comment",
"comment",
"official_review",
"official_review",
"official_review",
"official_review"
],
"note_created": [
1610040433795,
1606301456699,
1606277579106,
1606148663841,
1605734946214,
1605734339135,
1605733911330,
1605733770030,
1605733570744,
1605733347838,
1605733124919,
1605125356516,
1605123817431,
1604342465707,
1603898491444,
1603813858722,
1603691417749
],
"note_signatures": [
[
"ICLR.cc/2021/Conference/Program_Chairs"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"ICLR.cc/2021/Conference/Paper3565/Authors"
],
[
"~Sven_Behnke1"
],
[
"ICLR.cc/2021/Conference/Paper3565/AnonReviewer2"
],
[
"ICLR.cc/2021/Conference/Paper3565/AnonReviewer4"
],
[
"ICLR.cc/2021/Conference/Paper3565/AnonReviewer3"
],
[
"ICLR.cc/2021/Conference/Paper3565/AnonReviewer1"
]
],
"structured_content_str": [
"{\"title\": \"Final Decision\", \"decision\": \"Accept (Poster)\", \"comment\": \"This paper was unanimously rated above the acceptance threshold by the\\nreviewers. While all reviewers agree it is worth accepting, they\\ndiffered in their enthusiasm. Most reviewers agree that major\\nlimitations of the paper include that the paper provides no insight into why\\nDale's principle exists and the actual results are not truly\\nstate-of-the-art. Nevertheless there is agreement that the paper\\npresents results worth publicizing to the ICLR audience. The comparison\\nof the inhibitory network to normalization schemes is interesting.\\nAlso, please reference the Neural Abstraction Pyramid work.\"}",
"{\"title\": \"response to further points\", \"comment\": \"Thank you for your additional response. Yes, you are correct, in our model I neurons only project within layer and only receive input from the layer below, which is inspired by feedforward inhibition in the brain. However, we have also added a section to discuss how our results apply to feedback inhibition and recurrent networks in the appendix section B. Please also see our response to reviewer 2, point 1) for further discussion of this issue.\\n\\nAs for I neuron connectivity, we would point out that this depends on interneuron subtype and brain region. There are a number of interneurons that mediate feedforward but not feedback inhibition, for example in the hippocampus Schaffer collateral associated interneurons and neurogliaform cells both receive extrinsic excitatory afferents but not axon collaterals of local excitatory cells. In addition, for a \\u201chypothetical average\\u201d interneuron in hippocampus CA1, it is estimated only ~10-20% of the total excitatory input is recurrent. (See table 26, Bezaire and Soltesz 2013, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3775914/). \\n\\nThank you for raising these points, we agree it would be beneficial to include a discussion of model and anatomy connectivity and propose to do so in the camera ready version of the paper if accepted.\"}",
"{\"title\": \"further points\", \"comment\": \"I thank the authors for addressing my review. I have more points regarding the connection to I neurons.\\nMy understanding is that, in the brain, I neurons mostly receive recurrent connections from the local circuits, not inputs from another brain area. And also I neurons typically do not project to another region. I neurons certainly can interact with other I neurons.\\n\\nBased on the authors reply, it appeared that in the proposed model, I neurons only receive inputs from the level below and there is no recurrent between the I neurons. Also, I neurons do not project to the next layer in the model, right?\\n\\nIf my understanding is correct, these discrepancies between the connectivity structure in the model and the anatomy should be carefully stated and discussed in the paper.\"}",
"{\"title\": \"Paper updated\", \"comment\": \"Again, we wish to thank the reviewers for their insightful comments on our paper. We have made the modifications we proposed and uploaded a new version of the paper. The new text is highlighted in purple in the revised pdf. We note that we have included the preliminary results for the extension to convolutional networks in the Appendix, but we expect to have more thorough results ready for a camera-ready version of our paper, if it is accepted. We feel that the paper is greatly improved, and we hope that the reviewers agree.\"}",
"{\"title\": \"Clarifying and expanding on several points 1/2\", \"comment\": \"We are very grateful to the reviewer for their thorough, fair and insightful comments on our paper. Here are our specific responses to the individual comments:\\n\\n*1) The role of the subtractive and divisive components need to be better explained. Are both of them necessary for getting the results shown later?*\\n\\nThis is a very interesting question. We can say with certainty that the subtractive component is critical, as it provides the ability to match the function approximation capabilities of a normal ANN. With respect to the divisive component, it may be less necessary, but still useful. Specifically, the divisive component is initialized to provide some of the same benefits as other normalization schemes. Notably, other normalization approaches help learning and appear to make the system more robust to initialization (e.g. Zhang et al. 2019, https://arxiv.org/pdf/1901.09321.pdf). But, normalisation is not necessary for learning, per se. Informally, we have observed similar properties with divisive inhibition in our model. However, we should note that in our model the specific equivalence to normalization is not enforced after initialization. Future work could examine whether additional advantages could be drawn from developing techniques for ensuring continued equivalence to existing normalization schemes (as mentioned in the Discussion). We propose to add more discussion of this matter to the manuscript.\\n\\n*2) The authors assume the number of E neurons is far larger than that of the I neurons. This is not quite true in physiology. The E/I ratio reported is often around 4:1. The authors assumed 10% of neurons are I neurons- this is on the smaller end. Another related concern is that, in cortex, despite of a smaller number, I neurons are often responsible for controlling the dynamics/computation due to the dense connectivity from I to E neurons. I am a little bit worried that the paper is studying a quite different regime, in which the E neurons are dominating. Also, would adding more I neurons decrease the performance of the network? If that is the case, that would be concerning.*\\n\\nThis is a good question, indeed, the reviewer is correct that the percentage of inhibitory neurons in cortical circuits is likely above 10%. We were simply being conservative in our choice of this number. But, it is important to ensure that our choice is not critical to our results. Given this, we have run new simulations with larger numbers of inhibitory neurons, such that the ratio is 4:1. We find that learning is in fact slightly better in this scenario (see attached figure 1, table 1 in https://pdfhost.io/v/AFBEMscCX_DANN_Preliminary_Responsepdf.pdf). We propose adding these results to the Appendix of the manuscript.\\n\\n*3) The initialization of E/I network has been carefully studied previously in the context of training balanced E/I recurrent neural networks (e.g., Ingrosso & Abbott, 2019, which the authors cited). How does the authors scheme different from the previous work*\\n\\nThis is an excellent question. The Ingrosso & Abbott (2019) paper is indeed closely related to our paper. 
However, the Ingrosso & Abbott paper is focussed on two issues: (1) the development of an alternative to recursive least squares training for networks with separate excitatory and inhibitory units, (2) the development of networks that maintain \\u201cdynamic\\u201d balance (meaning the E/I balance occurs without synaptic updates) versus \\u201cparametric\\u201d balance (wherein E/I balance requires synaptic plasticity). Their goal was not to design techniques for training ANNs with gradient descent, nor was it to develop networks that obey Dale\\u2019s principle but which match the learning performance of traditional ANNs. Thus, the goals of the two papers, though related, are ultimately different. We propose to add some discussion of these differences, as well as the differences to other related papers, to the introduction in order to clarify the unique contributions of our work. \\n\\n*4) The method assumes inhibitory units are linear units. Several questions arise. First, is this a mathematical issue or a numerical issues? Second, does this imply the firing rate of inhibitory neuron can be both positive and negative?*\\n\\nThese are important points to clarify. On the first, the choice was largely driven by ease of mathematical analysis. On the second, in our specific models, inhibitory neurons cannot have negative firing rates despite being linear. The reason is that they only receive positive inputs from the layer below and they do not have a bias term. Thus, their output is strictly non-negative. We propose to clarify this point in the manuscript. In future work that incorporates inhibitory inputs to the inhibitory units, we believe that using appropriate nonlinear functions for the inhibitory units will be important.\"}",
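To make the subtractive/divisive distinction discussed in point 1 concrete, here is a minimal NumPy sketch of one way a DANN-style layer could combine linear inhibitory units with subtractive and divisive terms. The wiring, the exponential gain `g`, and the epsilon are illustrative assumptions on my part, not the authors' published formulation:

```python
import numpy as np

def dann_layer_sketch(x, W_ex, W_ei, W_ie_sub, W_ie_div, g, eps=1e-5):
    """Illustrative Dale's-ANN layer: all weight matrices are non-negative,
    so excitation and inhibition come from separate unit populations.

    x:        non-negative activity from the layer below, shape (n_in,)
    W_ex:     excitatory weights, shape (n_e, n_in)
    W_ei:     input -> inhibitory weights, shape (n_i, n_in)
    W_ie_sub: inhibitory -> excitatory subtractive weights, shape (n_e, n_i)
    W_ie_div: inhibitory -> excitatory divisive weights, shape (n_e, n_i)
    g:        per-unit gain on the divisive term (assumed form), shape (n_e,)
    """
    h_i = W_ei @ x                       # linear inhibitory units; non-negative
                                         # because x >= 0 and there is no bias
    centred = W_ex @ x - W_ie_sub @ h_i  # subtractive inhibition centres the drive
    scale = np.exp(g) * (W_ie_div @ h_i) + eps  # divisive inhibition rescales it
    return np.maximum(centred / scale, 0.0)     # ReLU on the excitatory units
```

Note how the Dale constraint is carried entirely by the non-negativity of the weights: signed computation arises only from the subtraction and division performed by the separate inhibitory population.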
"{\"title\": \"Clarifying and expanding on several points 2/2\", \"comment\": \"*5) In fig4, DANN performs significantly worse than LayerNorm and BathNorm.*\\n\\nWe would argue that figure 4 shows equivalent performance on K-MNIST. But, the reviewer is correct that the DANN performance is not quite as good as LayerNorm and BatchNorm on Fashion-MNIST. However, we would note that they are actually quite close. For example, if the reviewer looks at Table 3, they will see that on the test set DANNs achieved an error rate of 10.962 +/- 0.365, compared to 10.445 +/- 0.455 for LayerNorm and 9.992 +/- 0.218 for BatchNorm, which is a real difference, but arguably not huge. For comparison, the column constrained ANN achieved only 14.986 +/- 0.674. Moreover, we would note that the DANN performance was within the standard deviation of the MLP performance.\\n\\n*6)The algorithms is not tested on slightly more challenging benchmark datasets such as CIFAR10 or ImageNet. Relatedly, would DANN scale up to larger networks?*\\n\\nWhile, to our knowledge, this is the first paper to show ANNs that obey Dale\\u2019s principle learning as well as standard ANNs on simple tasks, this is a very important question. In-line with our response to Reviewer 3, comment 3, we note that the corrections derived in the paper apply to convolutional networks, and we have been running experiments on deep convolutional DANNs trained on CIFAR-10. We have preliminary data (see attached figure 2 in https://pdfhost.io/v/AFBEMscCX_DANN_Preliminary_Responsepdf.pdf) suggesting that DANN convnets have performance approximately equal to that of standard convnets on this dataset. We intend on running a more thorough set of experiments following on this preliminary data (with full hyperparameter tuning), though this may take some time. We propose to add discussion of how to apply the DANN formalism to convnets, and if the reviewers feel that it is important, we can include the results of these experiments in the camera ready version of the manuscript.\\n\\n*7) Are their connections between the I neurons within the same layers?*\\n\\nNo, there are not. We will clarify this in the paper.\\n\\n*8) page 4, \\u201cUnlike a column constrained network, a layer in a DANN is not restricted in its potential function space. \\u201c - It is unclear what this sentence means\\u2026*\\n\\nWe can see how this sentence is unclear. What we mean by this is that in a column constrained network there are literally many functions that a single layer cannot approximate, because the linear operation is constrained to matrices with columns that have only positive or negative signs. In contrast, in DANNs, the initial linear integration in the excitatory units can match any linear function. We propose expanding this sentence to clarify this point.\\n\\n*9) Between Eq 4 and Eq 5, the authors mentioned the exponential family. What particular distribution was used? Gaussian or any exponential family distribution would produce similar results?*\\n\\nOur mathematical analysis only assumes any distribution from the natural exponential family (the exponential family with T(y) = y, see footnote 1). So, it applies equally to any such distribution. This group of distributions includes the Gaussian, Poisson, gamma, and binomial distributions. We propose to clarify this point in the footnote. \\n\\n*10) The authors wrote: \\u201cAs a result, inhibitory unit parameters updates are scaled down relative to excitatory parameter updates. 
This is intriguing given the differences between inhibitory and excitatory neuron plasticity\\u2026including the relative extent of weight changes in excitatory and inhibitory neurons (McBain et al., 1999). \\u201d I think these comparisons to the neuroscience literature are too vague and potentially mis-leading. To make this useful, it would helpful to make the comparison more specific and clear.*\\n\\nThis is a fair point. What we were referring to, ultimately, was the fact that inhibitory plasticity is fairly difficult to achieve experimentally. Indeed, in the past, many neuroscientists thought that it did not exist (such as McBain et al, 1999). Thus, all we intended to refer to here was the apparent reduced plasticity at inhibitory synapses in the brain. We will clarify this in this section.\\n\\n*11) I am worried that the experiments for the ColumnEi model was not treated fairly. In section 5.1, it is mentioned that 50 columns are negative. Did the authors try to make increase this number to see if the performance would be improved for the ColumnEi model?*\\n\\nThis is an important point to clarify. To test this question, we ran additional experiments with ColumnEi models that contain 100 negative columns. We find that these models learn just as poorly as the other ColumnEi models. Please see the attached figure 1, table 1 in https://pdfhost.io/v/AFBEMscCX_DANN_Preliminary_Responsepdf.pdf. We will include these results in the revised paper.\"}",
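As background for the answer to question 9, the natural exponential family referenced there is standard; a compact statement (textbook material, not specific to this paper):

```latex
% Natural exponential family: sufficient statistic T(y) = y
p(y \mid \eta) = h(y)\,\exp\!\big(\eta\, y - A(\eta)\big),
\qquad
\mathbb{E}[y] = A'(\eta), \quad \operatorname{Var}(y) = A''(\eta).
```

The Gaussian (fixed variance), Poisson, gamma (fixed shape), and binomial (fixed number of trials) distributions all take this form, which is why the analysis applies to any of them.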
"{\"title\": \"Pushing the performance of Dale\\u2019s ANNs\", \"comment\": \"We are very happy that the reviewer found our paper insightful, and we thank them for their constructive critiques. Our responses are as follows:\\n\\n*1) Apparently this insight provides no benefit for designing ANN.*\\n\\nIndeed, as the reviewer notes here, we did not observe better performance with DANNs than with standard ANNs. Of course, the goal of this paper was to close the gap in learning performance between standard ANNs and ANNs that obey Dale\\u2019s principle. One reason that this is important is simply that ANNs that obey Dale\\u2019s principle, but which are not impaired at learning relative to normal ANNs, will be a useful tool for neuroscience research. However, we also wonder about potential computational benefits to Dale\\u2019s principle, and hope in future work to explore this possibility. We see this paper, which closes the learning gap, as a key initial step towards these future investigations. Please see also our reply to Reviewer 2, comment 2 for more discussion on this matter.\\n\\n*2) Furthermore the biological insight is rather limited because biological neural networks are not feedforward networks.*\\n\\nThe reviewer is correct that real neural circuits are typically recurrent, and thus, it would be beneficial to also consider how DANNs can operate within the recurrent context. However, thanks to the mathematical similarity between a multilayer feedforward neural network and a recurrent neural rolled out through time (see e.g. Liao and Poggio, 2016, https://arxiv.org/abs/1604.03640), making this connection is relatively straightforward. In fact, our formulation of DANNs is fully applicable to recurrent neural networks, thanks to these connections. We propose to add a section to the appendix describing how our formulations can be ported to the case of recurrent neural networks. If the reviewers agree that this is a good idea, we will include this in our revised manuscript. Please see also our response to Reviewer 2, comment 1.\\n\\n*3) Also, the chosen tasks (3 variations of MNIST) are relatively simple, and are solved with relatively shallow networks, with just 4 hidden layers. In my view this evaluation does not support the much more general claim in the Abstract that \\u201eANN\\u2019s that respect Dale\\u2019s principle can be built without sacrificing learning performance\\u201c.*\\n\\nWe agree that the tasks we explored here were relatively simple. However, as noted by Reviewer 4, despite these tasks being simple this is, to our knowledge, the first paper to show ANNs that obey Dale\\u2019s principle that can learn on these tasks as well as standard ANNs. Nonetheless, to expand on our results, we have been exploring the use of DANN style architectures in deep convolutional networks. First we note that the response of a convolutional network can be expressed as a normal matrix multiplication where the rows of the weight matrix correspond to convolutional filters, and the columns of the input matrix correspond to the different filter locations. As such, we can readily express the same DANN formulation for convolutional networks. Second, we have preliminary results showing that learning in DANN convnets is approximately as good as learning in regular convnets (see the attached figure 2 in https://pdfhost.io/v/AFBEMscCX_DANN_Preliminary_Responsepdf.pdf). 
We propose to add discussion of how to apply the DANN formalism to convnets, and if the reviewers feels it is important, we can include a more thorough version of this data (after appropriate hyperparameter optimization) in the final version of the paper. Though, we note that full hyperparameter optimization and experimentation will take some time, so the final results of these experiments may not be ready by next week.\"}",
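The "convolution as matrix multiplication" observation above is easy to verify directly; the following toy NumPy sketch (single channel, valid padding, stride 1, deep-learning-style cross-correlation) is an illustration of the general idea, not the authors' code:

```python
import numpy as np

def conv2d_as_matmul(img, filters):
    """Valid 2D convolution (cross-correlation) as one matrix product.

    img:     (H, W) single-channel input
    filters: (n_f, k, k) stack of square filters
    returns: (n_f, H-k+1, W-k+1) feature maps
    """
    n_f, k, _ = filters.shape
    H, W = img.shape
    out_h, out_w = H - k + 1, W - k + 1
    # Each column of `patches` is one flattened k x k filter location (im2col).
    patches = np.empty((k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            patches[:, i * out_w + j] = img[i:i + k, j:j + k].ravel()
    W_mat = filters.reshape(n_f, k * k)   # rows = flattened filters
    return (W_mat @ patches).reshape(n_f, out_h, out_w)
```

For a random `img` and `filters`, the output should match `scipy.signal.correlate2d(img, f, mode='valid')` applied per filter, which is one way to sanity-check the equivalence the authors rely on.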
"{\"title\": \"Checking our cortical chauvinism\", \"comment\": \"We thank the reviewer for their kind and constructive comments. We fully agree that non-cortical circuits can provide equal inspiration for these investigations and we will specifically add statements and references to that effect, e.g. noting the mushroom body of insects. Moreover, we will discuss the interesting observation that there may be very general principles at play with respect to maintaining balanced output via mutual inhibition, reference the paper the reviewer noted, and propose this as a future extension of our work.\"}",
"{\"title\": \"Incorporating recurrence and the question of computational advantage 2/2\", \"comment\": \"*4) Although the paper is generally well written, the authors could make it clearer. In particular, it would help if they defined symbols such as the circled dot or variables such as y when they are first used.*\\n\\nYes, we agree. We will update the text to ensure all symbols are properly defined.\"}",
"{\"title\": \"Incorporating recurrence and the question of computational advantage 1/2\", \"comment\": \"We thank the reviewer for their comments, and are happy that they found our paper interesting. The reviewer\\u2019s comments raise important questions. Our responses to these points are as follows:\\n\\n*1) Although feedforward inhibition has its place in the brain, most connections of inhibitory interneurons with excitatory neurons are reciprocal, resulting in feedback inhibition. Therefore, feedforward inhibition seems like a secondary factor here.*\\n\\nThe reviewer is correct that reciprocal/feedback inhibition is an important component of inhibition in the brain. We are not sure that it is fair to say that feedforward inhibition is a secondary factor, as there is ample evidence showing that feedforward inhibition is a critical, and plastic, regulator of responses in numerous circuits across the brain (see e.g. Pouille et al. 2009, Nature Neuroscience, 12:1577, 2009 or Hennequin et al. 2017, Annual Review of Neuroscience, 40:557-579 for a review related to plasticity). Indeed, the evidence suggests that feedforward inhibition can be the critical factor for determining early responses in neural circuits (Pouille & Scanziani, 2001, Science, 293: 1159\\u20131163). Of course, learning of feedback inhibition is also important, particularly for maintaining dynamic balance and for shaping responses over time (as also explained in Hennequin et al. 2017). Thus, the reviewer is correct that including feedback inhibition would be ideal. Importantly, though, we note though that our formulation for DANNs can still be applied to feedback inhibition. Recurrent neural networks obey many of the same mathematical principles as multi-layer feedforward neural networks (see e.g. Liao and Poggio, 2016, https://arxiv.org/abs/1604.03640). If we imagine unrolling a recurrent neural network with separate excitatory and inhibitory populations, then the feedback inhibition could be treated exactly like feedforward inhibition, but with \\u201clayers\\u201d corresponding to timesteps. Thus, all of our mathematical formulations and analyses would still hold for the unrolled recurrent network. Given this important point, if the reviewer agrees, we will add a section in a revised version of the Appendix explaining how our formulation of DANNs can be used to model feedback inhibition.\\n\\n*2) The DANNs are shown to be just no worse than ANNs that do not respect Dale\\u2019s rule. If biology \\u201cinvested the effort\\u201d to evolve inhibitory interneurons respecting Dale\\u2019s rule, this is probably because they confer a computational advantage, not just lack of disadavantage.*\\n\\nThis is a very interesting issue that the reviewer raised, and it generated a lot of discussion amongst the authors. After discussing the matter, what we would say is that it is unclear whether Dale\\u2019s principle represents an \\u201cinvestment of effort\\u201d by biology or not. Though it is easy to think about possible ways to avoid Dale\\u2019s principle using known physiological mechanisms, it may also represent an evolutionary local minima, whereby early phylogenetic choices led to constraints on the system that were difficult to evolve away. This is the opinion of some of the authors. However, the reviewer may also be right that Dale\\u2019s principle does confer a computational advantage to real brains, which is why evolution kept it around. This is, in fact, the opinion of the majority of the authors. 
We think that future work should investigate potential advantages to Dale\\u2019s principle more thoroughly. However, this was not the goal of this study, which was instead to solve the problem of ANNs with separate excitatory and inhibitory units performing worse when trained with gradient descent. Indeed, it is hard to see how we can understand the potential computational advantages of ANNs that obey Dale\\u2019s principle if they are actually poor at learning relative to normal ANNs. Thus, we see our work as a necessary first step to future studies that could more thoroughly explore potential advantages to Dale\\u2019s principle. If the reviewer thinks it is important to include in the paper, we would add discussion of this matter to a revised version.\\n\\n*3) The formulation of Dale\\u2019s rule on page 1 is not consistent with the current biological knowledge. A better version would be: \\u201cA neuron releases the same fast neurotransmitter at each of its pre-synaptic terminals\\u201d. Note that this does not mean that the action of a neuron is always excitatory or always inhibitory on all of its post-synaptic partners. It is possible, as often the case in invertebrates, that different post-synaptic partners have different receptors resulting in de- or hyper-polarization in different post-synaptic neurons.*\\n\\nWe agree with the reviewer, thank you for noting this point. We will adjust the language in the introduction to recognize the fact that there are neural circuits where the same neurotransmitters can affect different postsynaptic neurons differently.\"}",
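The unrolling argument in point 1 can be sketched in code: a recurrent E/I step repeated over timesteps behaves like a stack of feedforward layers with tied weights, so feedback inhibition at one timestep plays the role of feedforward inhibition into the next "layer". The subtractive-only update below is an illustrative simplification on my part, not the paper's model:

```python
import numpy as np

def unrolled_ei_rnn(x_seq, W_xe, W_ee, W_ei, W_ie):
    """Unroll a toy E/I recurrent step into a feedforward stack.

    x_seq: iterable of input vectors; all weight matrices are non-negative
           (Dale's principle), so sign structure comes from the subtraction.
    W_xe:  input -> excitatory weights, shape (n_e, n_x)
    W_ee:  recurrent excitatory weights, shape (n_e, n_e)
    W_ei:  excitatory -> inhibitory weights, shape (n_i, n_e)
    W_ie:  inhibitory -> excitatory weights, shape (n_e, n_i)
    """
    h_e = np.zeros(W_ee.shape[0])
    for x_t in x_seq:                     # one "layer" per timestep
        drive = W_xe @ x_t + W_ee @ h_e   # excitatory drive into this "layer"
        h_i = W_ei @ h_e                  # linear inhibitory units read the
                                          # previous excitatory state
        h_e = np.maximum(drive - W_ie @ h_i, 0.0)  # subtractive inhibition
    return h_e
```

Viewed this way, the corrections derived for feedforward DANN layers apply timestep by timestep, which is the substance of the authors' claim that the formulation ports to recurrent networks.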
"{\"title\": \"A wonderfully constructive set of critical reviews\", \"comment\": \"We thank all four reviewers for their thoughtful and insightful critiques/comments. They not only got us to examine the specific capabilities of our models more closely, they also initiated a number of interesting discussions between the authors.\\n\\nWe are in the process of re-writing the manuscript in order to incorporate all of the reviewer\\u2019s points. Below, the reviewers will find our responses to their specific comments which highlight the changes that we propose to make. They can also find preliminary extended results referenced in the comments here: (https://pdfhost.io/v/AFBEMscCX_DANN_Preliminary_Responsepdf.pdf ). Depending on the reviewer\\u2019s opinions of these proposed changes, we will upload a new version of the paper by Monday the 23rd, with these changes incorporated. We believe that with these modifications the paper will be stronger, and we thank the reviewers for their time and help on this.\"}",
"{\"title\": \"The issue is training...\", \"comment\": \"Thanks for letting us know about your work. Looks interesting! We will look over it to see how it applies to our work.\\n\\nHowever, please note, we never claim in the paper that ANNs with separate inhibitory and excitatory units are novel. We know others have done that before. The key point for our paper is that ANNs with separate E and I neurons typically don't learn *as well as* standard ANNs when you apply gradient descent to them. Our paper addresses this by developing techniques for getting good gradient-based learning even when there are separate E and I populations.\\n\\nThis is our key contribution, not simply having separate E and I populations.\"}",
"{\"title\": \"ANNs with separate excitatory and inhibitory units are not new\", \"comment\": \"ANNs with separate excitatory and inhibitory units are not new.\\nOne example of these is the Neural Abstraction Pyramid, which observed Dale's principle by using separate specific excitatory and fewer unspecific inhibitory units.\\n\\nThe Neural Abstraction Pyramid is a hierarchical recurrent convolutional neural architecture, for which unsupervised learning of multi-level feature hierarchies and supervised training for multiple computer vision problems,\\nincluding object detection, semantic segmentation, and image reconstruction has been demonstrated.\\n\\nKey idea of the Neural Abstraction Pyramid is to iteratively incorporate partial interpretations as context in order to resolve local ambiguities.\", \"the_best_reference_to_the_neural_abstraction_pyramid_is_sven_behnke\": \"\\\"Hierarchical Neural Networks for Image Interpretation\\\",\\nLNCS 2766 , Springer, 2003:\", \"https\": \"//www.ais.uni-bonn.de/books/LNCS2766.pdf\"}",
"{\"title\": \"Showing that Dale's principle does not hurt the performance of feedforward ANNs does not illuminate its computational purpose\", \"review\": \"Inspired by the observations of feedforward inhibition in the brain, the authors propose a novel ANN architecture that respects Dale\\u2019s rule (DANN). They provide two improvements for training DANNs: better initialization and update scaling for synaptic weights. As a result, they empirically demonstrate that DANNs perform no worse than the ANNs that do not respect Dale\\u2019s rule.\\n\\nAlthough, I find the contribution interesting, my enthusiasm is tempered by the following two issues:\\n\\n1.\\tAlthough feedforward inhibition has its place in the brain, most connections of inhibitory interneurons with excitatory neurons are reciprocal, resulting in feedback inhibition. Therefore, feedforward inhibition seems like a secondary factor here.\\n\\n2.\\tThe DANNs are shown to be just no worse than ANNs that do not respect Dale\\u2019s rule. If biology \\u201cinvested the effort\\u201d to evolve inhibitory interneurons respecting Dale\\u2019s rule, this is probably because they confer a computational advantage, not just lack of disadavantage. \\n\\nThe formulation of Dale\\u2019s rule on page 1 is not consistent with the current biological knowledge. A better version would be: \\u201cA neuron releases the same fast neurotransmitter at each of its pre-synaptic terminals\\u201d. Note that this does not mean that the action of a neuron is always excitatory or always inhibitory on all of its post-synaptic partners. It is possible, as often the case in invertebrates, that different post-synaptic partners have different receptors resulting in de- or hyper-polarization in different post-synaptic neurons. \\n \\nAlthough the paper is generally well written, the authors could make it clearer. In particular, it would help if they defined symbols such as the circled dot or variables such as y when they are first used.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"3: The reviewer is fairly confident that the evaluation is correct\"}",
"{\"title\": \"An interesting submission on training ANNs with E/I neuronal division\", \"review\": \"Most neurons in the brains are either excitatory (E) or inhibitory (I) - sometimes referred to as Dale\\u2019s law. Practically Dale\\u2019s principle is often left out of Artificial Neural Networks (ANNs) because having the E and I separation often impairs learning, although this has not been well documented in the literature (probably due to that this is also interpreted as a negative result). In this paper, the authors propose a new scheme to construct and train the feedforward E/I network by incorporating several ingredients, including feedforward inhibition and E/I balance among others. It is shown that this particular kind of E/I networks (DANNs) trained on MNIST and variations of MNIST could achieve a level of performance that is comparable to those without E/I separation.\", \"quality\": \"I think this is an interesting submission of good quality, with some novel ideas and promising preliminary results.\", \"clarity\": \"The writing is generally clear.\", \"originality\": \"As far as I can tell, the results are original.\", \"significance\": \"Although the results are promising, I have reservations about the significance of these results as the performance of the models are still worst than the standard ANNs.\", \"pros\": \"1.To my knowledge, this is the first E/I network that could achieve comparable performance with the standard ANN model on MNIST task (although at the same time, I have to say that not too many papers have studied and reported this issue).\\n2. The ingredients in the proposed model is well motivated in neuroscience, such as the feedforward inhibition, and E/I balance, as well as no connections between I neurons across the different layers.\\n3. The results on the MNIST and its variations look promising. \\n4. The paper is fairly well written and the basic ideas are clear.\", \"cons\": \"1.The role of the subtractive and divisive components need to be better explained. Are both of them necessary for getting the results shown later?\\n2. The authors assume the number of E neurons is far larger than that of the I neurons. This is not quite true in physiology. The E/I ratio reported is often around 4:1. The authors assumed 10% of neurons are I neurons- this is on the smaller end. Another related concern is that, in cortex, despite of a smaller number, I neurons are often responsible for controlling the dynamics/computation due to the dense connectivity from I to E neurons. I am a little bit worried that the paper is studying a quite different regime, in which the E neurons are dominating. Also, would adding more I neurons decrease the performance of the network? If that is the case, that would be concerning.\\n3. The initialization of E/I network has been carefully studied previously in the context of training balanced E/I recurrent neural networks (e.g., Ingrosso & Abbott, 2019, which the authors cited). How does the authors scheme different from the previous work?\\n4. The method assumes inhibitory units are linear units. Several questions arise. First, is this a mathematical issue or a numerical issues? Second, does this imply the firing rate of inhibitory neuron can be both positive and negative?\\n5. In fig4, DANN performs significantly worse than LayerNorm and BathNorm.\\n6.The algorithms is not tested on slightly more challenging benchmark datasets such as CIFAR10 or ImageNet. 
Relatedly, would DANN scale up to larger networks?\", \"questions_to_be_clarified\": \"*Are their connections between the I neurons within the same layers?\\n*page 4, \\u201cUnlike a column constrained network, a layer in a DANN is not restricted in its potential function space. \\u201c - It is unclear what this sentence means\\u2026\\n*Between Eq 4 and Eq 5, the authors mentioned the exponential family. What particular distribution was used? Gaussian or any exponential family distribution would produce similar results?\\n*The authors wrote: \\u201cAs a result, inhibitory unit parameters updates are scaled down relative to excitatory parameter updates. This is intriguing given the differences between inhibitory and excitatory neuron plasticity\\u2026including the relative extent of weight changes in excitatory and inhibitory neurons (McBain et al., 1999). \\u201d I think these comparisons to the neuroscience literature are too vague and potentially mis-leading. To make this useful, it would helpful to make the comparison more specific and clear. \\n*I am worried that the experiments for the ColumnEi model was not treated fairly. In section 5.1, it is mentioned that 50 columns are negative. Did the authors try to make increase this number to see if the performance would be improved for the ColumnEi model?\\n\\n\\n*********updated after rebuttal period\\nI still consider this as an interesting contribution, and stand with my original rating. \\nIt would be useful if the discrepancies and similarity between the connectivity structures in the model and the anatomy could be more carefully discussed in the paper.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
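For readers unfamiliar with the ColumnEi baseline questioned in the final point, the column-sign constraint can be sketched in a few lines. The construction below (weight magnitudes, which columns are negative) is my guess at the general idea for illustration, not the paper's initialization:

```python
import numpy as np

def column_ei_weights(n_out, n_in, n_neg, rng=np.random.default_rng(0)):
    """Toy 'ColumnEi' weight matrix: each input column has a fixed sign,
    so every presynaptic unit is purely excitatory or purely inhibitory.
    Here the last `n_neg` columns are constrained to be negative
    (hypothetical choice; which columns carry which sign is arbitrary)."""
    W = np.abs(rng.standard_normal((n_out, n_in)))  # non-negative magnitudes
    signs = np.ones(n_in)
    signs[-n_neg:] = -1.0
    return W * signs  # broadcast one sign per column

# Example: a layer with 100 inputs of which 50 are inhibitory, as in Sec. 5.1.
W = column_ei_weights(n_out=64, n_in=100, n_neg=50)
```

The question about varying `n_neg` then amounts to asking whether moving the E/I split changes the baseline's learning performance, which the authors answer with the 100-negative-column control.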
"{\"title\": \"Dale's principle may not reduce the performance of feedforward ANNs if one uses negative weights only for feedforward inhibition.\", \"review\": \"Summary: It is shown that Dale\\u2019s principle can be observed in feedfoward ANNs if one uses inhibitory neurons in the form of feedforward inhibition, while the other neurons are purely excitatory.\", \"pros\": \"This is a nice and new insight. It appears to be useful for understanding the design of biological neural networks, and at least one type of uses of inhibitory neurons in them.\", \"cons\": \"Apparently this insight provides no benefit for designing ANN. Furthermore the biological insight is rather limited because biological neural networks are not feedforward networks. Also, the chosen tasks (3 variations of MNIST) are relatively simple, and are solved with relatively shallow networks, with just 4 hidden layers. In my view this evaluation does not support the much more general claim in the Abstract that \\u201eANN\\u2019s that respect Dale\\u2019s principle can be built without sacrificing learning performance\\u201c.\", \"rating\": \"6: Marginally above acceptance threshold\", \"confidence\": \"4: The reviewer is confident but not absolutely certain that the evaluation is correct\"}",
"{\"title\": \"Gain modulation of inhibitory feedforward inhibition\", \"review\": \"This is a great investigation on how to scale the gain of the inhibitory weights to balance the impact that the changes that the excitatory and inhibitory connections have on the layer\\u2019s output. I think using the KL distance that naturally connects with the Fisher Information is neat. I appreciate the effort that the authors make to connect the manner neural circuits are designed and connect it with ANN. You never know when the breakthrough can arise.\\n\\nI love the experiments that the authors present illustrating with clarity the impact that having the proper gain modulation of the inhibitory changes have in the speed of convergence.\\n\\nMy single constructive criticism is that the inspiration in cortical circuits do not prevent the authors to get inspiration from smaller neural circuits like in insects for example. The Mushroom Bodies of the insects are the equivalent of the cortex and present feedforward inhibition. The number of layers is much smaller but the neural principles that operate are fairly consistent across multiple animal species. Drawing from that experience, the mutual inhibition within layer may provide a natural mechanism to keep balance in the output distribution as shown for example in mean field models that investigate the regulation of activity in a dynamical neural layer (see for example https://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1003133). \\n\\nOther that this comment I learn and enjoy from reading this paper. I think it should be accepted.\", \"rating\": \"9: Top 15% of accepted papers, strong accept\", \"confidence\": \"5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature\"}"
]
} |